Test Report: QEMU_macOS 19546

                    
9c905d7ddc6fcb24a41b70e16c9a4a5dd3740602:2024-10-03:36493

Failed tests (100/275)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 38.8
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10
32 TestAddons/serial/GCPAuth/PullSecret 480.3
47 TestCertOptions 10
48 TestCertExpiration 195.32
49 TestDockerFlags 10.33
50 TestForceSystemdFlag 10.07
51 TestForceSystemdEnv 10.68
96 TestFunctional/parallel/ServiceCmdConnect 33.85
168 TestMultiControlPlane/serial/StopSecondaryNode 162.29
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 150.14
170 TestMultiControlPlane/serial/RestartSecondaryNode 185.35
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.1
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.57
173 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
175 TestMultiControlPlane/serial/StopCluster 300.23
176 TestMultiControlPlane/serial/RestartCluster 5.26
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
178 TestMultiControlPlane/serial/AddSecondaryNode 0.07
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
182 TestImageBuild/serial/Setup 10.02
185 TestJSONOutput/start/Command 9.67
191 TestJSONOutput/pause/Command 0.08
197 TestJSONOutput/unpause/Command 0.04
214 TestMinikubeProfile 10.23
217 TestMountStart/serial/StartWithMountFirst 10.51
220 TestMultiNode/serial/FreshStart2Nodes 9.84
221 TestMultiNode/serial/DeployApp2Nodes 88.6
222 TestMultiNode/serial/PingHostFrom2Pods 0.09
223 TestMultiNode/serial/AddNode 0.08
224 TestMultiNode/serial/MultiNodeLabels 0.06
225 TestMultiNode/serial/ProfileList 0.08
226 TestMultiNode/serial/CopyFile 0.06
227 TestMultiNode/serial/StopNode 0.14
228 TestMultiNode/serial/StartAfterStop 40.92
229 TestMultiNode/serial/RestartKeepsNodes 9.17
230 TestMultiNode/serial/DeleteNode 0.1
231 TestMultiNode/serial/StopMultiNode 1.96
232 TestMultiNode/serial/RestartMultiNode 5.27
233 TestMultiNode/serial/ValidateNameConflict 19.75
237 TestPreload 9.89
239 TestScheduledStopUnix 10.02
240 TestSkaffold 15.96
243 TestRunningBinaryUpgrade 621.33
245 TestKubernetesUpgrade 18.41
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.34
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.07
261 TestStoppedBinaryUpgrade/Upgrade 580.65
263 TestPause/serial/Start 9.96
273 TestNoKubernetes/serial/StartWithK8s 9.95
274 TestNoKubernetes/serial/StartWithStopK8s 6.36
275 TestNoKubernetes/serial/Start 5.83
279 TestNoKubernetes/serial/StartNoArgs 5.86
281 TestNetworkPlugins/group/auto/Start 9.71
282 TestNetworkPlugins/group/kindnet/Start 9.76
283 TestNetworkPlugins/group/calico/Start 9.8
284 TestNetworkPlugins/group/custom-flannel/Start 9.87
285 TestNetworkPlugins/group/false/Start 10.02
286 TestNetworkPlugins/group/enable-default-cni/Start 9.73
287 TestNetworkPlugins/group/flannel/Start 9.72
288 TestNetworkPlugins/group/bridge/Start 9.9
290 TestNetworkPlugins/group/kubenet/Start 9.81
292 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
293 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
294 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/FirstStart 10
299 TestStartStop/group/old-k8s-version/serial/SecondStart 7.34
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.07
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
303 TestStartStop/group/old-k8s-version/serial/Pause 0.1
305 TestStartStop/group/embed-certs/serial/FirstStart 11.62
306 TestStartStop/group/no-preload/serial/DeployApp 0.1
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
310 TestStartStop/group/no-preload/serial/SecondStart 5.5
311 TestStartStop/group/embed-certs/serial/DeployApp 0.1
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
316 TestStartStop/group/no-preload/serial/Pause 0.1
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
321 TestStartStop/group/embed-certs/serial/SecondStart 6.63
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
327 TestStartStop/group/embed-certs/serial/Pause 0.11
330 TestStartStop/group/newest-cni/serial/FirstStart 10
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.73
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
341 TestStartStop/group/newest-cni/serial/SecondStart 5.26
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
345 TestStartStop/group/newest-cni/serial/Pause 0.11
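
Any entry above can be re-run on its own with the standard Go test selector against the integration package (a sketch, assuming a checked-out minikube tree with out/minikube-darwin-arm64 already built; the suite accepts further harness flags not shown here):

    go test ./test/integration -run "TestOffline" -timeout 30m -v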
TestDownloadOnly/v1.20.0/json-events (38.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-360000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-360000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (38.795223792s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e9cf34f9-1748-4156-8fa7-dd68309d22a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-360000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"53c04b14-f563-4206-8588-50d7b5c1ecfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"d826b504-c45c-464f-9599-e7683a8d4805","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig"}}
	{"specversion":"1.0","id":"fb5ef2a2-53a9-475e-be1c-a4af05b8da87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ceb0fc59-b12f-439b-a77b-cb3bf9c68d74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"666abf1b-9285-48a6-9638-655284d8d66a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube"}}
	{"specversion":"1.0","id":"eb199e55-8fcf-4b23-ac03-df31bb56747d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"943c5946-bdf2-4bcf-8281-6c0e78d37019","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ad9f1fd-6e3a-4c43-908b-318f265e51c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a7ac542d-1b5a-4e50-b9d0-763242f9011d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d143fc31-df91-4bac-b6e2-08743e3771bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-360000\" primary control-plane node in \"download-only-360000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdf7d6e0-eb15-4616-a7e9-d37ddd3a9d7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3fd92ca-3175-431b-ae63-d81516a95df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0] Decompressors:map[bz2:0x1400000f790 gz:0x1400000f798 tar:0x1400000f740 tar.bz2:0x1400000f750 tar.gz:0x1400000f760 tar.xz:0x1400000f770 tar.zst:0x1400000f780 tbz2:0x1400000f750 tgz:0x14
00000f760 txz:0x1400000f770 tzst:0x1400000f780 xz:0x1400000f7a0 zip:0x1400000f7b0 zst:0x1400000f7a8] Getters:map[file:0x14000464770 http:0x14000746460 https:0x14000746730] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d67dc67d-71dd-44ba-bdd2-0caed514c23c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:47:27.309002    1557 out.go:345] Setting OutFile to fd 1 ...
	I1003 19:47:27.309156    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:47:27.309159    1557 out.go:358] Setting ErrFile to fd 2...
	I1003 19:47:27.309162    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:47:27.309293    1557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	W1003 19:47:27.309397    1557 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19546-1040/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19546-1040/.minikube/config/config.json: no such file or directory
	I1003 19:47:27.310814    1557 out.go:352] Setting JSON to true
	I1003 19:47:27.330098    1557 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1018,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 19:47:27.330157    1557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 19:47:27.335722    1557 out.go:97] [download-only-360000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 19:47:27.335887    1557 notify.go:220] Checking for updates...
	W1003 19:47:27.335896    1557 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 19:47:27.338688    1557 out.go:169] MINIKUBE_LOCATION=19546
	I1003 19:47:27.339876    1557 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 19:47:27.343721    1557 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 19:47:27.350671    1557 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:47:27.357657    1557 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	W1003 19:47:27.364714    1557 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 19:47:27.364941    1557 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 19:47:27.369630    1557 out.go:97] Using the qemu2 driver based on user configuration
	I1003 19:47:27.369651    1557 start.go:297] selected driver: qemu2
	I1003 19:47:27.369669    1557 start.go:901] validating driver "qemu2" against <nil>
	I1003 19:47:27.369782    1557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 19:47:27.373677    1557 out.go:169] Automatically selected the socket_vmnet network
	I1003 19:47:27.379727    1557 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1003 19:47:27.379851    1557 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:47:27.379898    1557 cni.go:84] Creating CNI manager for ""
	I1003 19:47:27.379940    1557 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 19:47:27.379984    1557 start.go:340] cluster config:
	{Name:download-only-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:47:27.384764    1557 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:47:27.388737    1557 out.go:97] Downloading VM boot image ...
	I1003 19:47:27.388755    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1003 19:47:45.037323    1557 out.go:97] Starting "download-only-360000" primary control-plane node in "download-only-360000" cluster
	I1003 19:47:45.037348    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:47:45.297778    1557 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1003 19:47:45.297899    1557 cache.go:56] Caching tarball of preloaded images
	I1003 19:47:45.298785    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:47:45.302742    1557 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1003 19:47:45.302768    1557 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 19:47:45.860385    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1003 19:48:04.715150    1557 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 19:48:04.715305    1557 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 19:48:05.410015    1557 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1003 19:48:05.410245    1557 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/download-only-360000/config.json ...
	I1003 19:48:05.410262    1557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/download-only-360000/config.json: {Name:mk177ee186f2f53615699c35126b62254166afca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:48:05.410532    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:48:05.410775    1557 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1003 19:48:06.028044    1557 out.go:193] 
	W1003 19:48:06.032165    1557 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0] Decompressors:map[bz2:0x1400000f790 gz:0x1400000f798 tar:0x1400000f740 tar.bz2:0x1400000f750 tar.gz:0x1400000f760 tar.xz:0x1400000f770 tar.zst:0x1400000f780 tbz2:0x1400000f750 tgz:0x1400000f760 txz:0x1400000f770 tzst:0x1400000f780 xz:0x1400000f7a0 zip:0x1400000f7b0 zst:0x1400000f7a8] Getters:map[file:0x14000464770 http:0x14000746460 https:0x14000746730] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1003 19:48:06.032193    1557 out_reason.go:110] 
	W1003 19:48:06.039034    1557 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:48:06.043060    1557 out.go:193] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-360000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (38.80s)
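
The root cause in this log is the 404 returned for the kubectl checksum URL. A quick way to confirm the missing upstream artifact outside the harness (URL taken verbatim from the error above; a suggested check, not part of the test):

    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1

A 404 here would mean no darwin/arm64 kubectl was ever published for v1.20.0, making this a property of the requested Kubernetes version on Apple Silicon rather than of the minikube build under test.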

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestOffline (10s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-795000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-795000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.855858s)

                                                
                                                
-- stdout --
	* [offline-docker-795000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-795000" primary control-plane node in "offline-docker-795000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-795000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:40:07.119369    3999 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:40:07.119528    3999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:07.119530    3999 out.go:358] Setting ErrFile to fd 2...
	I1003 20:40:07.119535    3999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:07.119666    3999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:40:07.120799    3999 out.go:352] Setting JSON to false
	I1003 20:40:07.140443    3999 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4178,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:40:07.140509    3999 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:40:07.144533    3999 out.go:177] * [offline-docker-795000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:40:07.152751    3999 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:40:07.152774    3999 notify.go:220] Checking for updates...
	I1003 20:40:07.159748    3999 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:40:07.162725    3999 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:40:07.165634    3999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:40:07.168695    3999 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:40:07.171763    3999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:40:07.175227    3999 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:40:07.175294    3999 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:40:07.178661    3999 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:40:07.185649    3999 start.go:297] selected driver: qemu2
	I1003 20:40:07.185663    3999 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:40:07.185670    3999 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:40:07.187867    3999 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:40:07.190660    3999 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:40:07.193766    3999 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:40:07.193784    3999 cni.go:84] Creating CNI manager for ""
	I1003 20:40:07.193812    3999 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:40:07.193816    3999 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:40:07.193856    3999 start.go:340] cluster config:
	{Name:offline-docker-795000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-795000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/b
in/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:40:07.198781    3999 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:40:07.202646    3999 out.go:177] * Starting "offline-docker-795000" primary control-plane node in "offline-docker-795000" cluster
	I1003 20:40:07.206709    3999 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:40:07.206733    3999 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:40:07.206742    3999 cache.go:56] Caching tarball of preloaded images
	I1003 20:40:07.206824    3999 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:40:07.206829    3999 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:40:07.206892    3999 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/offline-docker-795000/config.json ...
	I1003 20:40:07.206901    3999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/offline-docker-795000/config.json: {Name:mkc24906337b180f739aa7a277ab5e8b4d318739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:40:07.207151    3999 start.go:360] acquireMachinesLock for offline-docker-795000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:07.207194    3999 start.go:364] duration metric: took 37.25µs to acquireMachinesLock for "offline-docker-795000"
	I1003 20:40:07.207205    3999 start.go:93] Provisioning new machine with config: &{Name:offline-docker-795000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-795000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:07.207235    3999 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:07.211659    3999 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:07.226973    3999 start.go:159] libmachine.API.Create for "offline-docker-795000" (driver="qemu2")
	I1003 20:40:07.227011    3999 client.go:168] LocalClient.Create starting
	I1003 20:40:07.227104    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:07.227144    3999 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:07.227155    3999 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:07.227199    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:07.227231    3999 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:07.227238    3999 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:07.227657    3999 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:07.358311    3999 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:07.557561    3999 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:07.557571    3999 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:07.557789    3999 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2
	I1003 20:40:07.568655    3999 main.go:141] libmachine: STDOUT: 
	I1003 20:40:07.568681    3999 main.go:141] libmachine: STDERR: 
	I1003 20:40:07.568765    3999 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2 +20000M
	I1003 20:40:07.578229    3999 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:07.578249    3999 main.go:141] libmachine: STDERR: 
	I1003 20:40:07.578279    3999 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2
	I1003 20:40:07.578284    3999 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:07.578297    3999 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:07.578325    3999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:1a:be:d5:f1:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2
	I1003 20:40:07.580351    3999 main.go:141] libmachine: STDOUT: 
	I1003 20:40:07.580365    3999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:07.580383    3999 client.go:171] duration metric: took 353.367708ms to LocalClient.Create
	I1003 20:40:09.582463    3999 start.go:128] duration metric: took 2.375220417s to createHost
	I1003 20:40:09.582484    3999 start.go:83] releasing machines lock for "offline-docker-795000", held for 2.375285667s
	W1003 20:40:09.582505    3999 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:09.592481    3999 out.go:177] * Deleting "offline-docker-795000" in qemu2 ...
	W1003 20:40:09.600604    3999 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:09.600617    3999 start.go:729] Will try again in 5 seconds ...
	I1003 20:40:14.602889    3999 start.go:360] acquireMachinesLock for offline-docker-795000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:14.603567    3999 start.go:364] duration metric: took 537.042µs to acquireMachinesLock for "offline-docker-795000"
	I1003 20:40:14.603746    3999 start.go:93] Provisioning new machine with config: &{Name:offline-docker-795000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-795000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:14.604168    3999 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:14.609843    3999 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:14.659328    3999 start.go:159] libmachine.API.Create for "offline-docker-795000" (driver="qemu2")
	I1003 20:40:14.659385    3999 client.go:168] LocalClient.Create starting
	I1003 20:40:14.659521    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:14.659596    3999 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:14.659614    3999 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:14.659697    3999 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:14.659753    3999 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:14.659772    3999 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:14.660344    3999 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:14.800212    3999 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:14.875830    3999 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:14.875836    3999 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:14.876043    3999 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2
	I1003 20:40:14.886138    3999 main.go:141] libmachine: STDOUT: 
	I1003 20:40:14.886154    3999 main.go:141] libmachine: STDERR: 
	I1003 20:40:14.886213    3999 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2 +20000M
	I1003 20:40:14.894565    3999 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:14.894584    3999 main.go:141] libmachine: STDERR: 
	I1003 20:40:14.894599    3999 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2
	I1003 20:40:14.894603    3999 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:14.894614    3999 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:14.894656    3999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:09:35:a7:45:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/offline-docker-795000/disk.qcow2
	I1003 20:40:14.896453    3999 main.go:141] libmachine: STDOUT: 
	I1003 20:40:14.896471    3999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:14.896485    3999 client.go:171] duration metric: took 237.093917ms to LocalClient.Create
	I1003 20:40:16.898654    3999 start.go:128] duration metric: took 2.294458125s to createHost
	I1003 20:40:16.898717    3999 start.go:83] releasing machines lock for "offline-docker-795000", held for 2.295100542s
	W1003 20:40:16.899166    3999 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-795000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-795000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:16.913023    3999 out.go:201] 
	W1003 20:40:16.917988    3999 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:40:16.918016    3999 out.go:270] * 
	* 
	W1003 20:40:16.920163    3999 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:40:16.930864    3999 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-795000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-03 20:40:16.944961 -0700 PDT m=+3169.664483043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-795000 -n offline-docker-795000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-795000 -n offline-docker-795000: exit status 7 (65.449958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-795000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-795000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-795000
--- FAIL: TestOffline (10.00s)
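
TestOffline, TestCertOptions, and many of the other roughly ten-second start failures in this run stop at the same error: Failed to connect to "/var/run/socket_vmnet": Connection refused. That points at the socket_vmnet daemon on the build agent rather than at minikube itself. A minimal host-side check (paths taken from the log above; a suggested diagnostic, not part of the harness):

    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

If the socket is missing or no daemon is listed, restarting socket_vmnet on the agent should clear this whole class of failures.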

                                                
                                    
TestAddons/serial/GCPAuth/PullSecret (480.3s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:615: (dbg) Run:  kubectl --context addons-814000 create -f testdata/busybox.yaml
addons_test.go:622: (dbg) Run:  kubectl --context addons-814000 create sa gcp-auth-test
addons_test.go:628: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f989816-0c97-497d-b042-3e46950da4be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
addons_test.go:628: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:628: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-814000 -n addons-814000
addons_test.go:628: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-03 20:01:17.613957 -0700 PDT m=+830.390898709
addons_test.go:628: (dbg) Run:  kubectl --context addons-814000 describe po busybox -n default
addons_test.go:628: (dbg) kubectl --context addons-814000 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-814000/192.168.105.2
Start Time:       Thu, 03 Oct 2024 19:53:17 -0700
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
IP:  10.244.0.27
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59lvq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-59lvq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-814000
Normal   Pulling    6m26s (x4 over 8m)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m25s (x4 over 7m58s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
Warning  Failed     6m25s (x4 over 7m58s)   kubelet            Error: ErrImagePull
Warning  Failed     6m11s (x6 over 7m58s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m58s (x20 over 7m58s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:628: (dbg) Run:  kubectl --context addons-814000 logs busybox -n default
addons_test.go:628: (dbg) Non-zero exit: kubectl --context addons-814000 logs busybox -n default: exit status 1 (43.364167ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:628: kubectl --context addons-814000 logs busybox -n default: exit status 1
addons_test.go:630: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.30s)
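
The busybox pod never starts because each pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected with "unauthorized: authentication failed", which is unusual for a public image and suggests the credentials injected by the gcp-auth addon are interfering with the pull. Two checks that narrow this down (a sketch; the context and image names are taken from the output above):

    # does the pull succeed outside the cluster, i.e. without the injected credentials?
    docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
    # which image pull secrets did gcp-auth attach to the default service account?
    kubectl --context addons-814000 get serviceaccount default -n default -o yaml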

                                                
                                    
TestCertOptions (10s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-725000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E1003 20:40:41.637347    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-725000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.736929708s)

                                                
                                                
-- stdout --
	* [cert-options-725000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-725000" primary control-plane node in "cert-options-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-725000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-725000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-725000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.826ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-725000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-725000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-725000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
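Note: the SAN assertions could not actually be exercised because the node never came up. On a running profile the same check can be done by hand; a minimal sketch (profile name and certificate path taken from the test above; the -ext flag assumes a reasonably recent openssl in the guest image):

	# Print only the subjectAltName extension of the apiserver certificate
	out/minikube-darwin-arm64 -p cert-options-725000 ssh \
	  "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"
	# A passing run would list 127.0.0.1, 192.168.15.15, localhost and www.google.com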
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-725000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-725000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-725000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.5375ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-725000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-725000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-725000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-725000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-725000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-03 20:40:47.995031 -0700 PDT m=+3200.714550793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-725000 -n cert-options-725000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-725000 -n cert-options-725000: exit status 7 (31.839334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-725000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-725000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-725000
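Note: as with the other qemu2 failures in this report, the start never got past "Failed to connect to \"/var/run/socket_vmnet\": Connection refused", i.e. the socket_vmnet daemon on the build host was not serving its socket. A hedged host-side check (assumes socket_vmnet was installed via Homebrew; these commands are illustrative, not captured from this run):

	# Is the socket present and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If installed via Homebrew, restarting the service is one possible remedy
	sudo brew services restart socket_vmnet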
--- FAIL: TestCertOptions (10.00s)

TestCertExpiration (195.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-224000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-224000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.978531209s)

                                                
                                                
-- stdout --
	* [cert-expiration-224000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-224000" primary control-plane node in "cert-expiration-224000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-224000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-224000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-224000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-224000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-224000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.201067459s)

                                                
                                                
-- stdout --
	* [cert-expiration-224000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-224000" primary control-plane node in "cert-expiration-224000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-224000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-224000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-224000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-224000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-224000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-224000" primary control-plane node in "cert-expiration-224000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-224000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-224000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-224000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-03 20:43:48.113571 -0700 PDT m=+3380.833073043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-224000 -n cert-expiration-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-224000 -n cert-expiration-224000: exit status 7 (64.817625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-224000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-224000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-224000
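Note: since neither start succeeded, the test never compared certificate lifetimes before and after the 3m expiry window. On a working profile the effect of --cert-expiration can be observed directly; a minimal sketch (profile name and certificate path as used by the test above):

	# Show when the apiserver certificate inside the guest expires
	out/minikube-darwin-arm64 -p cert-expiration-224000 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"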
--- FAIL: TestCertExpiration (195.32s)

TestDockerFlags (10.33s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-166000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-166000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.086508666s)

                                                
                                                
-- stdout --
	* [docker-flags-166000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-166000" primary control-plane node in "docker-flags-166000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-166000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:40:27.798929    4187 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:40:27.799099    4187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:27.799102    4187 out.go:358] Setting ErrFile to fd 2...
	I1003 20:40:27.799105    4187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:27.799250    4187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:40:27.800707    4187 out.go:352] Setting JSON to false
	I1003 20:40:27.818706    4187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4198,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:40:27.818771    4187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:40:27.824321    4187 out.go:177] * [docker-flags-166000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:40:27.832277    4187 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:40:27.832325    4187 notify.go:220] Checking for updates...
	I1003 20:40:27.839260    4187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:40:27.842343    4187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:40:27.845246    4187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:40:27.848296    4187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:40:27.851275    4187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:40:27.854665    4187 config.go:182] Loaded profile config "force-systemd-flag-191000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:40:27.854741    4187 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:40:27.854787    4187 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:40:27.859262    4187 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:40:27.866216    4187 start.go:297] selected driver: qemu2
	I1003 20:40:27.866223    4187 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:40:27.866235    4187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:40:27.868851    4187 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:40:27.872268    4187 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:40:27.875365    4187 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1003 20:40:27.875383    4187 cni.go:84] Creating CNI manager for ""
	I1003 20:40:27.875408    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:40:27.875413    4187 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:40:27.875442    4187 start.go:340] cluster config:
	{Name:docker-flags-166000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:40:27.880468    4187 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:40:27.884271    4187 out.go:177] * Starting "docker-flags-166000" primary control-plane node in "docker-flags-166000" cluster
	I1003 20:40:27.892187    4187 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:40:27.892205    4187 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:40:27.892216    4187 cache.go:56] Caching tarball of preloaded images
	I1003 20:40:27.892314    4187 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:40:27.892321    4187 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:40:27.892397    4187 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/docker-flags-166000/config.json ...
	I1003 20:40:27.892408    4187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/docker-flags-166000/config.json: {Name:mk82e291cf4807152145d22d2db678b72059ae04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:40:27.892715    4187 start.go:360] acquireMachinesLock for docker-flags-166000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:27.892774    4187 start.go:364] duration metric: took 47.333µs to acquireMachinesLock for "docker-flags-166000"
	I1003 20:40:27.892786    4187 start.go:93] Provisioning new machine with config: &{Name:docker-flags-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKe
y: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:27.892813    4187 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:27.904258    4187 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:27.923154    4187 start.go:159] libmachine.API.Create for "docker-flags-166000" (driver="qemu2")
	I1003 20:40:27.923185    4187 client.go:168] LocalClient.Create starting
	I1003 20:40:27.923279    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:27.923323    4187 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:27.923335    4187 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:27.923386    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:27.923422    4187 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:27.923432    4187 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:27.923872    4187 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:28.051115    4187 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:28.141773    4187 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:28.141783    4187 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:28.141973    4187 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2
	I1003 20:40:28.151832    4187 main.go:141] libmachine: STDOUT: 
	I1003 20:40:28.151850    4187 main.go:141] libmachine: STDERR: 
	I1003 20:40:28.151918    4187 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2 +20000M
	I1003 20:40:28.160325    4187 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:28.160338    4187 main.go:141] libmachine: STDERR: 
	I1003 20:40:28.160356    4187 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2
	I1003 20:40:28.160361    4187 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:28.160375    4187 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:28.160400    4187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:37:7c:56:d0:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2
	I1003 20:40:28.162210    4187 main.go:141] libmachine: STDOUT: 
	I1003 20:40:28.162228    4187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:28.162246    4187 client.go:171] duration metric: took 239.050125ms to LocalClient.Create
	I1003 20:40:30.164442    4187 start.go:128] duration metric: took 2.271607875s to createHost
	I1003 20:40:30.164514    4187 start.go:83] releasing machines lock for "docker-flags-166000", held for 2.271729459s
	W1003 20:40:30.164573    4187 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:30.177753    4187 out.go:177] * Deleting "docker-flags-166000" in qemu2 ...
	W1003 20:40:30.199474    4187 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:30.199507    4187 start.go:729] Will try again in 5 seconds ...
	I1003 20:40:35.201672    4187 start.go:360] acquireMachinesLock for docker-flags-166000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:35.442981    4187 start.go:364] duration metric: took 241.227333ms to acquireMachinesLock for "docker-flags-166000"
	I1003 20:40:35.443126    4187 start.go:93] Provisioning new machine with config: &{Name:docker-flags-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKe
y: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:35.443386    4187 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:35.452912    4187 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:35.500448    4187 start.go:159] libmachine.API.Create for "docker-flags-166000" (driver="qemu2")
	I1003 20:40:35.500509    4187 client.go:168] LocalClient.Create starting
	I1003 20:40:35.500667    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:35.500742    4187 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:35.500762    4187 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:35.500836    4187 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:35.500902    4187 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:35.500915    4187 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:35.501538    4187 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:35.652346    4187 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:35.792755    4187 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:35.792764    4187 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:35.792989    4187 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2
	I1003 20:40:35.803252    4187 main.go:141] libmachine: STDOUT: 
	I1003 20:40:35.803277    4187 main.go:141] libmachine: STDERR: 
	I1003 20:40:35.803331    4187 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2 +20000M
	I1003 20:40:35.811844    4187 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:35.811858    4187 main.go:141] libmachine: STDERR: 
	I1003 20:40:35.811869    4187 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2
	I1003 20:40:35.811875    4187 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:35.811884    4187 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:35.811909    4187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:cf:77:73:17:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/docker-flags-166000/disk.qcow2
	I1003 20:40:35.813747    4187 main.go:141] libmachine: STDOUT: 
	I1003 20:40:35.813761    4187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:35.813780    4187 client.go:171] duration metric: took 313.265125ms to LocalClient.Create
	I1003 20:40:37.815943    4187 start.go:128] duration metric: took 2.372531417s to createHost
	I1003 20:40:37.815997    4187 start.go:83] releasing machines lock for "docker-flags-166000", held for 2.37298075s
	W1003 20:40:37.816325    4187 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-166000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-166000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:37.825973    4187 out.go:201] 
	W1003 20:40:37.830881    4187 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:40:37.830904    4187 out.go:270] * 
	* 
	W1003 20:40:37.833637    4187 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:40:37.841759    4187 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-166000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-166000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-166000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (85.733917ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-166000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-166000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-166000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-166000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-166000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-166000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-166000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-166000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-166000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.750375ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-166000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-166000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-166000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-166000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug* . output: "* The control-plane node docker-flags-166000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-166000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-03 20:40:37.990763 -0700 PDT m=+3190.710283251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-166000 -n docker-flags-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-166000 -n docker-flags-166000: exit status 7 (31.430959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-166000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-166000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-166000
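Note: the checks at docker_test.go:63 and docker_test.go:73 never had a running docker unit to inspect. On a started node the flags passed via --docker-env and --docker-opt can be read back from systemd; a hedged sketch (the Environment= line shown is standard systemd formatting, assumed rather than captured from this run):

	# --docker-env values should appear on the docker unit's Environment
	out/minikube-darwin-arm64 -p docker-flags-166000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"
	# e.g. Environment=FOO=BAR BAZ=BAT

	# --docker-opt values should appear in ExecStart
	out/minikube-darwin-arm64 -p docker-flags-166000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager" | grep -- --debug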
--- FAIL: TestDockerFlags (10.33s)

TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-191000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-191000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.880156584s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-191000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-191000" primary control-plane node in "force-systemd-flag-191000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-191000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:40:22.884064    4166 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:40:22.884213    4166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:22.884216    4166 out.go:358] Setting ErrFile to fd 2...
	I1003 20:40:22.884220    4166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:22.884361    4166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:40:22.885531    4166 out.go:352] Setting JSON to false
	I1003 20:40:22.903085    4166 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4193,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:40:22.903147    4166 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:40:22.908562    4166 out.go:177] * [force-systemd-flag-191000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:40:22.917449    4166 notify.go:220] Checking for updates...
	I1003 20:40:22.921552    4166 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:40:22.924427    4166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:40:22.928431    4166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:40:22.932465    4166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:40:22.935424    4166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:40:22.938455    4166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:40:22.941854    4166 config.go:182] Loaded profile config "force-systemd-env-492000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:40:22.941942    4166 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:40:22.941995    4166 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:40:22.954005    4166 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:40:22.961480    4166 start.go:297] selected driver: qemu2
	I1003 20:40:22.961487    4166 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:40:22.961493    4166 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:40:22.964321    4166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:40:22.967385    4166 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:40:22.970475    4166 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 20:40:22.970492    4166 cni.go:84] Creating CNI manager for ""
	I1003 20:40:22.970528    4166 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:40:22.970534    4166 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:40:22.970568    4166 start.go:340] cluster config:
	{Name:force-systemd-flag-191000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-191000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:40:22.976005    4166 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:40:22.983442    4166 out.go:177] * Starting "force-systemd-flag-191000" primary control-plane node in "force-systemd-flag-191000" cluster
	I1003 20:40:22.987429    4166 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:40:22.987453    4166 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:40:22.987462    4166 cache.go:56] Caching tarball of preloaded images
	I1003 20:40:22.987559    4166 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:40:22.987565    4166 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:40:22.987642    4166 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/force-systemd-flag-191000/config.json ...
	I1003 20:40:22.987660    4166 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/force-systemd-flag-191000/config.json: {Name:mk44da138a6807826f00ebe7f9ccaf325ca43a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:40:22.987933    4166 start.go:360] acquireMachinesLock for force-systemd-flag-191000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:22.987987    4166 start.go:364] duration metric: took 45.042µs to acquireMachinesLock for "force-systemd-flag-191000"
	I1003 20:40:22.987999    4166 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-191000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:22.988036    4166 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:22.996412    4166 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:23.014882    4166 start.go:159] libmachine.API.Create for "force-systemd-flag-191000" (driver="qemu2")
	I1003 20:40:23.014916    4166 client.go:168] LocalClient.Create starting
	I1003 20:40:23.014988    4166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:23.015030    4166 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:23.015045    4166 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:23.015093    4166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:23.015126    4166 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:23.015137    4166 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:23.015571    4166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:23.139807    4166 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:23.230479    4166 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:23.230485    4166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:23.230673    4166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2
	I1003 20:40:23.240380    4166 main.go:141] libmachine: STDOUT: 
	I1003 20:40:23.240402    4166 main.go:141] libmachine: STDERR: 
	I1003 20:40:23.240457    4166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2 +20000M
	I1003 20:40:23.248937    4166 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:23.248951    4166 main.go:141] libmachine: STDERR: 
	I1003 20:40:23.248972    4166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2
	I1003 20:40:23.248978    4166 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:23.248994    4166 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:23.249019    4166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:2b:7c:9f:b1:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2
	I1003 20:40:23.250762    4166 main.go:141] libmachine: STDOUT: 
	I1003 20:40:23.250775    4166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:23.250796    4166 client.go:171] duration metric: took 235.8745ms to LocalClient.Create
	I1003 20:40:25.253087    4166 start.go:128] duration metric: took 2.265030584s to createHost
	I1003 20:40:25.253145    4166 start.go:83] releasing machines lock for "force-systemd-flag-191000", held for 2.265147958s
	W1003 20:40:25.253201    4166 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:25.267166    4166 out.go:177] * Deleting "force-systemd-flag-191000" in qemu2 ...
	W1003 20:40:25.284935    4166 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:25.284954    4166 start.go:729] Will try again in 5 seconds ...
	I1003 20:40:30.287178    4166 start.go:360] acquireMachinesLock for force-systemd-flag-191000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:30.287626    4166 start.go:364] duration metric: took 358.333µs to acquireMachinesLock for "force-systemd-flag-191000"
	I1003 20:40:30.287734    4166 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-191000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:30.287952    4166 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:30.291324    4166 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:30.339888    4166 start.go:159] libmachine.API.Create for "force-systemd-flag-191000" (driver="qemu2")
	I1003 20:40:30.339933    4166 client.go:168] LocalClient.Create starting
	I1003 20:40:30.340053    4166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:30.340131    4166 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:30.340211    4166 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:30.340279    4166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:30.340337    4166 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:30.340351    4166 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:30.340874    4166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:30.493125    4166 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:30.666974    4166 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:30.666986    4166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:30.667236    4166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2
	I1003 20:40:30.677679    4166 main.go:141] libmachine: STDOUT: 
	I1003 20:40:30.677695    4166 main.go:141] libmachine: STDERR: 
	I1003 20:40:30.677758    4166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2 +20000M
	I1003 20:40:30.686153    4166 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:30.686170    4166 main.go:141] libmachine: STDERR: 
	I1003 20:40:30.686183    4166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2
	I1003 20:40:30.686190    4166 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:30.686203    4166 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:30.686245    4166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:09:7d:e4:98:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-flag-191000/disk.qcow2
	I1003 20:40:30.688015    4166 main.go:141] libmachine: STDOUT: 
	I1003 20:40:30.688029    4166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:30.688041    4166 client.go:171] duration metric: took 348.102292ms to LocalClient.Create
	I1003 20:40:32.690211    4166 start.go:128] duration metric: took 2.402221875s to createHost
	I1003 20:40:32.690344    4166 start.go:83] releasing machines lock for "force-systemd-flag-191000", held for 2.402689459s
	W1003 20:40:32.690710    4166 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-191000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-191000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:32.704643    4166 out.go:201] 
	W1003 20:40:32.708420    4166 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:40:32.708450    4166 out.go:270] * 
	* 
	W1003 20:40:32.711075    4166 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:40:32.720250    4166 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-191000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-191000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-191000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.495792ms)

-- stdout --
	* The control-plane node force-systemd-flag-191000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-191000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-191000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-03 20:40:32.81935 -0700 PDT m=+3185.538871043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-191000 -n force-systemd-flag-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-191000 -n force-systemd-flag-191000: exit status 7 (37.314125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-191000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-191000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-191000
--- FAIL: TestForceSystemdFlag (10.07s)
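Both createHost attempts in this run die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the profile is left Stopped. Below is a minimal, illustrative probe (not part of the minikube test suite) that reproduces that first connection step; run on the build agent it would fail with the same "connection refused" whenever the socket_vmnet daemon is not listening on that path.

// socket_probe.go - illustrative check that the socket_vmnet daemon is reachable.
// The socket path is the default one shown in the logs above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"

	// Dialing the unix socket is the first thing socket_vmnet_client does;
	// "connection refused" here means no daemon is listening on this path.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is listening on %s\n", sock)
}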

TestForceSystemdEnv (10.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-492000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1003 20:40:17.600871    1556 install.go:79] stdout: 
W1003 20:40:17.600991    1556 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit 


I1003 20:40:17.601010    1556 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit]
I1003 20:40:17.613642    1556 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit]
I1003 20:40:17.624543    1556 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit]
I1003 20:40:17.636008    1556 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit]
I1003 20:40:17.657481    1556 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1003 20:40:17.657591    1556 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1003 20:40:19.505598    1556 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1003 20:40:19.505618    1556 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1003 20:40:19.505671    1556 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1003 20:40:19.505702    1556 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit
I1003 20:40:19.918014    1556 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40] Decompressors:map[bz2:0x14000687a50 gz:0x14000687a58 tar:0x14000687a00 tar.bz2:0x14000687a10 tar.gz:0x14000687a20 tar.xz:0x14000687a30 tar.zst:0x14000687a40 tbz2:0x14000687a10 tgz:0x14000687a20 txz:0x14000687a30 tzst:0x14000687a40 xz:0x14000687a60 zip:0x14000687a70 zst:0x14000687a68] Getters:map[file:0x1400147d650 http:0x140001166e0 https:0x14000116870] Dir:false ProgressListener:<nil> Insecure:false DisableSy
mlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1003 20:40:19.918122    1556 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit
I1003 20:40:22.801172    1556 install.go:79] stdout: 
W1003 20:40:22.801375    1556 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit 


I1003 20:40:22.801403    1556 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit]
I1003 20:40:22.818444    1556 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit]
I1003 20:40:22.832219    1556 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit]
I1003 20:40:22.843112    1556 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/002/docker-machine-driver-hyperkit]
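
The interleaved install.go/download.go lines above (pid 1556, from the concurrently running hyperkit driver install/update test) show the updater requesting an arch-specific binary first, hitting a 404 on its checksum file, and then falling back to the common, un-suffixed artifact. A rough standard-library sketch of that fallback shape follows; the helper names (fetch, downloadDriver) are hypothetical and not the real minikube code, only the base URL is taken from the log.

// download_fallback.go - hypothetical sketch of the "arch-specific first, then
// common version" download fallback visible in the download.go lines above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch writes the body of url to dst and fails on any non-200 status,
// mirroring the "bad response code: 404" error in the log.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

// downloadDriver tries the arch-specific artifact first and falls back to the
// common one, as the log does after the arm64 checksum returns 404.
func downloadDriver(base, arch, dst string) error {
	if err := fetch(base+"-"+arch, dst); err == nil {
		return nil
	}
	return fetch(base, dst)
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	if err := downloadDriver(base, "arm64", os.TempDir()+"/docker-machine-driver-hyperkit"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
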
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-492000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.48047875s)

-- stdout --
	* [force-systemd-env-492000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-492000" primary control-plane node in "force-systemd-env-492000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-492000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:40:17.118908    4134 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:40:17.119041    4134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:17.119044    4134 out.go:358] Setting ErrFile to fd 2...
	I1003 20:40:17.119047    4134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:40:17.119172    4134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:40:17.120371    4134 out.go:352] Setting JSON to false
	I1003 20:40:17.139988    4134 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4188,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:40:17.140051    4134 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:40:17.144923    4134 out.go:177] * [force-systemd-env-492000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:40:17.152795    4134 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:40:17.152859    4134 notify.go:220] Checking for updates...
	I1003 20:40:17.159803    4134 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:40:17.162754    4134 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:40:17.165784    4134 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:40:17.168711    4134 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:40:17.171724    4134 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1003 20:40:17.175105    4134 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:40:17.175165    4134 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:40:17.178698    4134 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:40:17.185741    4134 start.go:297] selected driver: qemu2
	I1003 20:40:17.185748    4134 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:40:17.185754    4134 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:40:17.188292    4134 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:40:17.192704    4134 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:40:17.196836    4134 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 20:40:17.196850    4134 cni.go:84] Creating CNI manager for ""
	I1003 20:40:17.196878    4134 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:40:17.196885    4134 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:40:17.196932    4134 start.go:340] cluster config:
	{Name:force-systemd-env-492000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:40:17.201595    4134 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:40:17.209732    4134 out.go:177] * Starting "force-systemd-env-492000" primary control-plane node in "force-systemd-env-492000" cluster
	I1003 20:40:17.213759    4134 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:40:17.213775    4134 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:40:17.213785    4134 cache.go:56] Caching tarball of preloaded images
	I1003 20:40:17.213860    4134 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:40:17.213866    4134 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:40:17.213942    4134 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/force-systemd-env-492000/config.json ...
	I1003 20:40:17.213953    4134 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/force-systemd-env-492000/config.json: {Name:mkadadbdd5d1c672852500271fbef64b3b98b637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:40:17.214172    4134 start.go:360] acquireMachinesLock for force-systemd-env-492000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:17.214221    4134 start.go:364] duration metric: took 39.417µs to acquireMachinesLock for "force-systemd-env-492000"
	I1003 20:40:17.214232    4134 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:17.214259    4134 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:17.224753    4134 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:17.240752    4134 start.go:159] libmachine.API.Create for "force-systemd-env-492000" (driver="qemu2")
	I1003 20:40:17.240782    4134 client.go:168] LocalClient.Create starting
	I1003 20:40:17.240851    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:17.240887    4134 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:17.240899    4134 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:17.240939    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:17.240969    4134 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:17.240980    4134 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:17.241300    4134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:17.370893    4134 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:17.456225    4134 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:17.456235    4134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:17.456460    4134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2
	I1003 20:40:17.466088    4134 main.go:141] libmachine: STDOUT: 
	I1003 20:40:17.466105    4134 main.go:141] libmachine: STDERR: 
	I1003 20:40:17.466164    4134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2 +20000M
	I1003 20:40:17.474542    4134 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:17.474557    4134 main.go:141] libmachine: STDERR: 
	I1003 20:40:17.474581    4134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2
	I1003 20:40:17.474587    4134 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:17.474601    4134 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:17.474627    4134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:17:78:27:3a:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2
	I1003 20:40:17.476397    4134 main.go:141] libmachine: STDOUT: 
	I1003 20:40:17.476410    4134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:17.476429    4134 client.go:171] duration metric: took 235.641667ms to LocalClient.Create
	I1003 20:40:19.478557    4134 start.go:128] duration metric: took 2.26429025s to createHost
	I1003 20:40:19.478576    4134 start.go:83] releasing machines lock for "force-systemd-env-492000", held for 2.264351291s
	W1003 20:40:19.478588    4134 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:19.485585    4134 out.go:177] * Deleting "force-systemd-env-492000" in qemu2 ...
	W1003 20:40:19.493484    4134 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:19.493490    4134 start.go:729] Will try again in 5 seconds ...
	I1003 20:40:24.495704    4134 start.go:360] acquireMachinesLock for force-systemd-env-492000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:40:25.253261    4134 start.go:364] duration metric: took 757.416042ms to acquireMachinesLock for "force-systemd-env-492000"
	I1003 20:40:25.253372    4134 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-492000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-492000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:40:25.253641    4134 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:40:25.260242    4134 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1003 20:40:25.305633    4134 start.go:159] libmachine.API.Create for "force-systemd-env-492000" (driver="qemu2")
	I1003 20:40:25.305688    4134 client.go:168] LocalClient.Create starting
	I1003 20:40:25.305851    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:40:25.305926    4134 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:25.305946    4134 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:25.306006    4134 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:40:25.306062    4134 main.go:141] libmachine: Decoding PEM data...
	I1003 20:40:25.306076    4134 main.go:141] libmachine: Parsing certificate...
	I1003 20:40:25.306738    4134 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:40:25.458592    4134 main.go:141] libmachine: Creating SSH key...
	I1003 20:40:25.505580    4134 main.go:141] libmachine: Creating Disk image...
	I1003 20:40:25.505585    4134 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:40:25.505791    4134 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2
	I1003 20:40:25.515641    4134 main.go:141] libmachine: STDOUT: 
	I1003 20:40:25.515660    4134 main.go:141] libmachine: STDERR: 
	I1003 20:40:25.515740    4134 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2 +20000M
	I1003 20:40:25.524131    4134 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:40:25.524147    4134 main.go:141] libmachine: STDERR: 
	I1003 20:40:25.524159    4134 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2
	I1003 20:40:25.524172    4134 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:40:25.524182    4134 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:40:25.524209    4134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f8:fc:ea:13:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/force-systemd-env-492000/disk.qcow2
	I1003 20:40:25.526045    4134 main.go:141] libmachine: STDOUT: 
	I1003 20:40:25.526065    4134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:40:25.526080    4134 client.go:171] duration metric: took 220.38575ms to LocalClient.Create
	I1003 20:40:27.528315    4134 start.go:128] duration metric: took 2.274632375s to createHost
	I1003 20:40:27.528384    4134 start.go:83] releasing machines lock for "force-systemd-env-492000", held for 2.27506075s
	W1003 20:40:27.528774    4134 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-492000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:40:27.539456    4134 out.go:201] 
	W1003 20:40:27.544259    4134 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:40:27.544331    4134 out.go:270] * 
	* 
	W1003 20:40:27.547030    4134 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:40:27.555339    4134 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-492000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-492000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-492000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (86.431ms)

-- stdout --
	* The control-plane node force-systemd-env-492000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-492000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-492000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-03 20:40:27.657872 -0700 PDT m=+3180.377393084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-492000 -n force-systemd-env-492000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-492000 -n force-systemd-env-492000: exit status 7 (36.090042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-492000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-492000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-492000
--- FAIL: TestForceSystemdEnv (10.68s)
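Had the qemu2 VM started, both force-systemd tests would go on to assert that Docker inside the guest reports the systemd cgroup driver; that is the `docker info --format {{.CgroupDriver}}` call which instead exits 83 above because the host is Stopped. A minimal local sketch of that assertion, assuming a directly reachable Docker daemon rather than `minikube ssh`:

// cgroup_driver_check.go - illustrative version of the cgroup-driver assertion
// the test performs via "minikube ssh"; here it queries a local Docker daemon.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker info failed: %v\n", err)
		os.Exit(1)
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		fmt.Fprintf(os.Stderr, "expected cgroup driver systemd, got %q\n", driver)
		os.Exit(1)
	}
	fmt.Println("cgroup driver is systemd")
}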

TestFunctional/parallel/ServiceCmdConnect (33.85s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect


=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-063000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-063000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-8d8zr" [58529b9e-0df9-4f5a-a505-2c0164dfcb9b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-8d8zr" [58529b9e-0df9-4f5a-a505-2c0164dfcb9b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.009688s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31997
functional_test.go:1661: error fetching http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
I1003 20:08:14.580141    1556 retry.go:31] will retry after 1.066485227s: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
I1003 20:08:15.650391    1556 retry.go:31] will retry after 942.237339ms: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
I1003 20:08:16.595680    1556 retry.go:31] will retry after 2.640142902s: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
I1003 20:08:19.239858    1556 retry.go:31] will retry after 1.888927048s: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
E1003 20:08:19.500039    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
I1003 20:08:21.132456    1556 retry.go:31] will retry after 2.777744993s: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
I1003 20:08:23.911934    1556 retry.go:31] will retry after 11.183918923s: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31997: Get "http://192.168.105.4:31997": dial tcp 192.168.105.4:31997: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-063000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-8d8zr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-063000/192.168.105.4
Start Time:       Thu, 03 Oct 2024 20:08:02 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://5e8114ffc318336de54e66f8f817ce9b6c7bd27670dd436d2c6f2b6a11b6bbaa
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 03 Oct 2024 20:08:26 -0700
Finished:     Thu, 03 Oct 2024 20:08:26 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 03 Oct 2024 20:08:09 -0700
Finished:     Thu, 03 Oct 2024 20:08:09 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvxqz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kvxqz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  32s               default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-8d8zr to functional-063000
Normal   Pulling    33s               kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     27s               kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.805s (5.805s including waiting). Image size: 84957542 bytes.
Normal   Created    9s (x3 over 27s)  kubelet            Created container echoserver-arm
Normal   Started    9s (x3 over 27s)  kubelet            Started container echoserver-arm
Normal   Pulled     9s (x2 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    9s (x3 over 25s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-8d8zr_default(58529b9e-0df9-4f5a-a505-2c0164dfcb9b)

                                                
                                                
functional_test.go:1608: (dbg) Run:  kubectl --context functional-063000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-063000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.55.202
IPs:                      10.111.55.202
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31997/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-063000 -n functional-063000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| cp      | functional-063000 cp functional-063000:/home/docker/cp-test.txt                                                 | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT | 03 Oct 24 20:07 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd939100816/001/cp-test.txt           |                   |         |         |                     |                     |
	| config  | functional-063000 config unset                                                                                  | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT | 03 Oct 24 20:07 PDT |
	|         | cpus                                                                                                            |                   |         |         |                     |                     |
	| config  | functional-063000 config get                                                                                    | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT |                     |
	|         | cpus                                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh -n                                                                                        | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT | 03 Oct 24 20:07 PDT |
	|         | functional-063000 sudo cat                                                                                      |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh echo                                                                                      | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT | 03 Oct 24 20:07 PDT |
	|         | hello                                                                                                           |                   |         |         |                     |                     |
	| cp      | functional-063000 cp                                                                                            | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT | 03 Oct 24 20:07 PDT |
	|         | testdata/cp-test.txt                                                                                            |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh cat                                                                                       | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT | 03 Oct 24 20:07 PDT |
	|         | /etc/hostname                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh -n                                                                                        | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT | 03 Oct 24 20:07 PDT |
	|         | functional-063000 sudo cat                                                                                      |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                                 |                   |         |         |                     |                     |
	| tunnel  | functional-063000 tunnel                                                                                        | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-063000 tunnel                                                                                        | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-063000 tunnel                                                                                        | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:07 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| addons  | functional-063000 addons list                                                                                   | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	| addons  | functional-063000 addons list                                                                                   | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-063000 service                                                                                       | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| service | functional-063000 service list                                                                                  | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	| service | functional-063000 service list                                                                                  | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-063000 service                                                                                       | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-063000                                                                                               | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-063000 service                                                                                       | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh findmnt                                                                                   | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| mount   | -p functional-063000                                                                                            | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1348374540/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh findmnt                                                                                   | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh findmnt                                                                                   | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh -- ls                                                                                     | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-063000 ssh cat                                                                                       | functional-063000 | jenkins | v1.34.0 | 03 Oct 24 20:08 PDT | 03 Oct 24 20:08 PDT |
	|         | /mount-9p/test-1728011305020592000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:06:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:06:43.008757    2411 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:06:43.008903    2411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:06:43.008905    2411 out.go:358] Setting ErrFile to fd 2...
	I1003 20:06:43.008907    2411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:06:43.009049    2411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:06:43.010185    2411 out.go:352] Setting JSON to false
	I1003 20:06:43.028684    2411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2174,"bootTime":1728009029,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:06:43.028778    2411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:06:43.031307    2411 out.go:177] * [functional-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:06:43.039722    2411 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:06:43.039797    2411 notify.go:220] Checking for updates...
	I1003 20:06:43.046604    2411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:06:43.052693    2411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:06:43.059581    2411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:06:43.072434    2411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:06:43.075594    2411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:06:43.082870    2411 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:06:43.082922    2411 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:06:43.087598    2411 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:06:43.094591    2411 start.go:297] selected driver: qemu2
	I1003 20:06:43.094594    2411 start.go:901] validating driver "qemu2" against &{Name:functional-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:06:43.094640    2411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:06:43.096937    2411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:06:43.096956    2411 cni.go:84] Creating CNI manager for ""
	I1003 20:06:43.096979    2411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:06:43.097021    2411 start.go:340] cluster config:
	{Name:functional-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:06:43.101015    2411 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:06:43.107659    2411 out.go:177] * Starting "functional-063000" primary control-plane node in "functional-063000" cluster
	I1003 20:06:43.111616    2411 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:06:43.111629    2411 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:06:43.111637    2411 cache.go:56] Caching tarball of preloaded images
	I1003 20:06:43.111711    2411 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:06:43.111722    2411 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:06:43.111790    2411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/config.json ...
	I1003 20:06:43.112185    2411 start.go:360] acquireMachinesLock for functional-063000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:06:43.112227    2411 start.go:364] duration metric: took 37.542µs to acquireMachinesLock for "functional-063000"
	I1003 20:06:43.112233    2411 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:06:43.112236    2411 fix.go:54] fixHost starting: 
	I1003 20:06:43.112780    2411 fix.go:112] recreateIfNeeded on functional-063000: state=Running err=<nil>
	W1003 20:06:43.112786    2411 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:06:43.117688    2411 out.go:177] * Updating the running qemu2 "functional-063000" VM ...
	I1003 20:06:43.125601    2411 machine.go:93] provisionDockerMachine start ...
	I1003 20:06:43.125634    2411 main.go:141] libmachine: Using SSH client type: native
	I1003 20:06:43.125753    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc9c00] 0x102ccc440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1003 20:06:43.125756    2411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:06:43.173990    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-063000
	
	I1003 20:06:43.173999    2411 buildroot.go:166] provisioning hostname "functional-063000"
	I1003 20:06:43.174041    2411 main.go:141] libmachine: Using SSH client type: native
	I1003 20:06:43.174131    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc9c00] 0x102ccc440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1003 20:06:43.174135    2411 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-063000 && echo "functional-063000" | sudo tee /etc/hostname
	I1003 20:06:43.224122    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-063000
	
	I1003 20:06:43.224178    2411 main.go:141] libmachine: Using SSH client type: native
	I1003 20:06:43.224295    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc9c00] 0x102ccc440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1003 20:06:43.224301    2411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-063000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-063000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-063000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:06:43.270382    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:06:43.270391    2411 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1040/.minikube}
	I1003 20:06:43.270400    2411 buildroot.go:174] setting up certificates
	I1003 20:06:43.270404    2411 provision.go:84] configureAuth start
	I1003 20:06:43.270409    2411 provision.go:143] copyHostCerts
	I1003 20:06:43.270509    2411 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem, removing ...
	I1003 20:06:43.270513    2411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem
	I1003 20:06:43.270655    2411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem (1078 bytes)
	I1003 20:06:43.270865    2411 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem, removing ...
	I1003 20:06:43.270867    2411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem
	I1003 20:06:43.270917    2411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem (1123 bytes)
	I1003 20:06:43.271044    2411 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem, removing ...
	I1003 20:06:43.271046    2411 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem
	I1003 20:06:43.271096    2411 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem (1675 bytes)
	I1003 20:06:43.271178    2411 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem org=jenkins.functional-063000 san=[127.0.0.1 192.168.105.4 functional-063000 localhost minikube]
	I1003 20:06:43.379997    2411 provision.go:177] copyRemoteCerts
	I1003 20:06:43.380034    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:06:43.380041    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
	I1003 20:06:43.406524    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1003 20:06:43.415172    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 20:06:43.423553    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 20:06:43.431827    2411 provision.go:87] duration metric: took 161.417917ms to configureAuth
	I1003 20:06:43.431833    2411 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:06:43.431940    2411 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:06:43.431980    2411 main.go:141] libmachine: Using SSH client type: native
	I1003 20:06:43.432061    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc9c00] 0x102ccc440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1003 20:06:43.432064    2411 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:06:43.478160    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:06:43.478166    2411 buildroot.go:70] root file system type: tmpfs
	I1003 20:06:43.478214    2411 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:06:43.478298    2411 main.go:141] libmachine: Using SSH client type: native
	I1003 20:06:43.478404    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc9c00] 0x102ccc440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1003 20:06:43.478434    2411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:06:43.529925    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:06:43.529976    2411 main.go:141] libmachine: Using SSH client type: native
	I1003 20:06:43.530074    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc9c00] 0x102ccc440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1003 20:06:43.530080    2411 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:06:43.577820    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:06:43.577826    2411 machine.go:96] duration metric: took 452.225542ms to provisionDockerMachine
	I1003 20:06:43.577830    2411 start.go:293] postStartSetup for "functional-063000" (driver="qemu2")
	I1003 20:06:43.577835    2411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:06:43.577884    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:06:43.577911    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
	I1003 20:06:43.605793    2411 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:06:43.607250    2411 info.go:137] Remote host: Buildroot 2023.02.9
	I1003 20:06:43.607254    2411 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/addons for local assets ...
	I1003 20:06:43.607333    2411 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/files for local assets ...
	I1003 20:06:43.607470    2411 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem -> 15562.pem in /etc/ssl/certs
	I1003 20:06:43.607611    2411 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/test/nested/copy/1556/hosts -> hosts in /etc/test/nested/copy/1556
	I1003 20:06:43.607659    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1556
	I1003 20:06:43.611242    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:06:43.619612    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/test/nested/copy/1556/hosts --> /etc/test/nested/copy/1556/hosts (40 bytes)
	I1003 20:06:43.628156    2411 start.go:296] duration metric: took 50.321708ms for postStartSetup
	I1003 20:06:43.628167    2411 fix.go:56] duration metric: took 515.934875ms for fixHost
	I1003 20:06:43.628220    2411 main.go:141] libmachine: Using SSH client type: native
	I1003 20:06:43.628323    2411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102cc9c00] 0x102ccc440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1003 20:06:43.628325    2411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:06:43.674760    2411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728011203.757424108
	
	I1003 20:06:43.674765    2411 fix.go:216] guest clock: 1728011203.757424108
	I1003 20:06:43.674769    2411 fix.go:229] Guest: 2024-10-03 20:06:43.757424108 -0700 PDT Remote: 2024-10-03 20:06:43.628168 -0700 PDT m=+0.639082918 (delta=129.256108ms)
	I1003 20:06:43.674782    2411 fix.go:200] guest clock delta is within tolerance: 129.256108ms
	I1003 20:06:43.674784    2411 start.go:83] releasing machines lock for "functional-063000", held for 562.558959ms
	I1003 20:06:43.675090    2411 ssh_runner.go:195] Run: cat /version.json
	I1003 20:06:43.675096    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
	I1003 20:06:43.675126    2411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:06:43.675140    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
	I1003 20:06:43.700663    2411 ssh_runner.go:195] Run: systemctl --version
	I1003 20:06:43.703381    2411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:06:43.746488    2411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:06:43.746529    2411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 20:06:43.750233    2411 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 20:06:43.750238    2411 start.go:495] detecting cgroup driver to use...
	I1003 20:06:43.750309    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:06:43.756657    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1003 20:06:43.760072    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:06:43.763826    2411 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:06:43.763857    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:06:43.767703    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:06:43.771687    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:06:43.775735    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:06:43.779580    2411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:06:43.783648    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:06:43.787738    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:06:43.791599    2411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:06:43.795435    2411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:06:43.799416    2411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:06:43.803370    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:06:43.915676    2411 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:06:43.926786    2411 start.go:495] detecting cgroup driver to use...
	I1003 20:06:43.926864    2411 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:06:43.933175    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:06:43.938749    2411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:06:43.946380    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:06:43.952091    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:06:43.957089    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:06:43.963414    2411 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:06:43.965012    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:06:43.968186    2411 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1003 20:06:43.974253    2411 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:06:44.083501    2411 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:06:44.184517    2411 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:06:44.184571    2411 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:06:44.191737    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:06:44.302522    2411 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:06:56.667447    2411 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.365005541s)
	I1003 20:06:56.667532    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:06:56.673822    2411 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:06:56.682469    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:06:56.688904    2411 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:06:56.787736    2411 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:06:56.874873    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:06:56.972663    2411 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:06:56.979497    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:06:56.985278    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:06:57.079479    2411 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:06:57.108558    2411 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:06:57.108643    2411 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:06:57.111506    2411 start.go:563] Will wait 60s for crictl version
	I1003 20:06:57.111545    2411 ssh_runner.go:195] Run: which crictl
	I1003 20:06:57.113158    2411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:06:57.125295    2411 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1003 20:06:57.125382    2411 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:06:57.133232    2411 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:06:57.144401    2411 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1003 20:06:57.144560    2411 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1003 20:06:57.150412    2411 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1003 20:06:57.154355    2411 kubeadm.go:883] updating cluster {Name:functional-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 20:06:57.154424    2411 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:06:57.154472    2411 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:06:57.160376    2411 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-063000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1003 20:06:57.160381    2411 docker.go:615] Images already preloaded, skipping extraction
	I1003 20:06:57.160440    2411 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:06:57.165893    2411 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-063000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1003 20:06:57.165899    2411 cache_images.go:84] Images are preloaded, skipping loading
	I1003 20:06:57.165902    2411 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.1 docker true true} ...
	I1003 20:06:57.165952    2411 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-063000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:06:57.166009    2411 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:06:57.181345    2411 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1003 20:06:57.181354    2411 cni.go:84] Creating CNI manager for ""
	I1003 20:06:57.181367    2411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:06:57.181371    2411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:06:57.181380    2411 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-063000 NodeName:functional-063000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:06:57.181431    2411 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-063000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:06:57.181502    2411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1003 20:06:57.185368    2411 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:06:57.185404    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 20:06:57.188623    2411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1003 20:06:57.194634    2411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:06:57.200368    2411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
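
The /var/tmp/minikube/kubeadm.yaml.new written above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, not minikube code, of how that file could be inspected to confirm the enable-admission-plugins override the log reports; it assumes gopkg.in/yaml.v3 is available and uses the on-node path from the log:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Iterate over every YAML document in the file.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		if doc["kind"] != "ClusterConfiguration" {
			continue
		}
		// Walk apiServer.extraArgs and print the admission-plugin setting.
		apiServer, _ := doc["apiServer"].(map[string]interface{})
		extraArgs, _ := apiServer["extraArgs"].(map[string]interface{})
		fmt.Println("enable-admission-plugins:", extraArgs["enable-admission-plugins"])
	}
}

Run against the config shown above, this would print NamespaceAutoProvision, matching the user-provided ExtraOptions value.
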
	I1003 20:06:57.206480    2411 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1003 20:06:57.207833    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:06:57.304053    2411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:06:57.309853    2411 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000 for IP: 192.168.105.4
	I1003 20:06:57.309864    2411 certs.go:194] generating shared ca certs ...
	I1003 20:06:57.309871    2411 certs.go:226] acquiring lock for ca certs: {Name:mke7121fb3a343b392a0b01a3f973157c3dad296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:06:57.310039    2411 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key
	I1003 20:06:57.310115    2411 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key
	I1003 20:06:57.310120    2411 certs.go:256] generating profile certs ...
	I1003 20:06:57.310177    2411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.key
	I1003 20:06:57.310272    2411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/apiserver.key.be7376e6
	I1003 20:06:57.310329    2411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/proxy-client.key
	I1003 20:06:57.310502    2411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem (1338 bytes)
	W1003 20:06:57.310534    2411 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556_empty.pem, impossibly tiny 0 bytes
	I1003 20:06:57.310539    2411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:06:57.310566    2411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem (1078 bytes)
	I1003 20:06:57.310586    2411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:06:57.310601    2411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem (1675 bytes)
	I1003 20:06:57.310640    2411 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:06:57.311007    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:06:57.319536    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 20:06:57.328244    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:06:57.336415    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:06:57.344732    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 20:06:57.353407    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 20:06:57.362570    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:06:57.371209    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 20:06:57.379607    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem --> /usr/share/ca-certificates/1556.pem (1338 bytes)
	I1003 20:06:57.387873    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /usr/share/ca-certificates/15562.pem (1708 bytes)
	I1003 20:06:57.396247    2411 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:06:57.404395    2411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:06:57.410312    2411 ssh_runner.go:195] Run: openssl version
	I1003 20:06:57.412644    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:06:57.416405    2411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:06:57.418036    2411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:06:57.418059    2411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:06:57.420191    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 20:06:57.423544    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1556.pem && ln -fs /usr/share/ca-certificates/1556.pem /etc/ssl/certs/1556.pem"
	I1003 20:06:57.427298    2411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1556.pem
	I1003 20:06:57.429014    2411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:05 /usr/share/ca-certificates/1556.pem
	I1003 20:06:57.429040    2411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1556.pem
	I1003 20:06:57.431217    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1556.pem /etc/ssl/certs/51391683.0"
	I1003 20:06:57.434879    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15562.pem && ln -fs /usr/share/ca-certificates/15562.pem /etc/ssl/certs/15562.pem"
	I1003 20:06:57.439116    2411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15562.pem
	I1003 20:06:57.440850    2411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:05 /usr/share/ca-certificates/15562.pem
	I1003 20:06:57.440872    2411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15562.pem
	I1003 20:06:57.442955    2411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15562.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:06:57.446751    2411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:06:57.448336    2411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:06:57.450355    2411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:06:57.452549    2411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:06:57.454534    2411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:06:57.456879    2411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:06:57.458902    2411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
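
The six openssl probes above each run `x509 -checkend 86400`, i.e. they fail if the certificate expires within the next 24 hours. A minimal Go sketch of the same check, using a hypothetical standalone program rather than minikube's own code, with one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}
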
	I1003 20:06:57.461008    2411 kubeadm.go:392] StartCluster: {Name:functional-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:06:57.461085    2411 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:06:57.472652    2411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:06:57.476245    2411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:06:57.476248    2411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:06:57.476279    2411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:06:57.479617    2411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:06:57.479896    2411 kubeconfig.go:125] found "functional-063000" server: "https://192.168.105.4:8441"
	I1003 20:06:57.480551    2411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:06:57.483843    2411 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1003 20:06:57.483849    2411 kubeadm.go:1160] stopping kube-system containers ...
	I1003 20:06:57.483897    2411 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:06:57.497419    2411 docker.go:483] Stopping containers: [e9570513fd78 897032cb75f9 85373af55c4a 6001d4e279cf 31224cb2c413 8a5d3758c4c9 a9d45773c39c 419dd3db943a 3eadbaf64490 cc32810f6b4c d2bbffa512ec 5a1a98916e68 0cfc67f8e128 9819345ee5f3 e86be58c4b37 01f81d3f9ff7 106f7319b525 fad305f12972 61cb822abb42 fa9e70988005 dee43dc97c65 8d9860bf0478 ac73a878548d 9baf21274c08 a62a789f3447 832a4363989c 33f1b5a1ddc5 fc2ea5bfbe80 01723e29ea43 fd2e1e54fdab b9103d4cda55 a623e646fa7a c6f779bb5695 efed883da3ad]
	I1003 20:06:57.497494    2411 ssh_runner.go:195] Run: docker stop e9570513fd78 897032cb75f9 85373af55c4a 6001d4e279cf 31224cb2c413 8a5d3758c4c9 a9d45773c39c 419dd3db943a 3eadbaf64490 cc32810f6b4c d2bbffa512ec 5a1a98916e68 0cfc67f8e128 9819345ee5f3 e86be58c4b37 01f81d3f9ff7 106f7319b525 fad305f12972 61cb822abb42 fa9e70988005 dee43dc97c65 8d9860bf0478 ac73a878548d 9baf21274c08 a62a789f3447 832a4363989c 33f1b5a1ddc5 fc2ea5bfbe80 01723e29ea43 fd2e1e54fdab b9103d4cda55 a623e646fa7a c6f779bb5695 efed883da3ad
	I1003 20:06:57.505585    2411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 20:06:57.620364    2411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:06:57.626908    2411 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct  4 03:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  4 03:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  4 03:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct  4 03:06 /etc/kubernetes/scheduler.conf
	
	I1003 20:06:57.626960    2411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 20:06:57.632420    2411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 20:06:57.637322    2411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 20:06:57.642040    2411 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:06:57.642086    2411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:06:57.646825    2411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 20:06:57.650977    2411 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:06:57.650999    2411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:06:57.654913    2411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:06:57.658816    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:06:57.675630    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:06:58.197026    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:06:58.327490    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:06:58.346276    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:06:58.368893    2411 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:06:58.368966    2411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:06:58.871408    2411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:06:59.371104    2411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:06:59.377266    2411 api_server.go:72] duration metric: took 1.008380417s to wait for apiserver process to appear ...
	I1003 20:06:59.377275    2411 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:06:59.377287    2411 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1003 20:07:01.717024    2411 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 20:07:01.717033    2411 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 20:07:01.717039    2411 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1003 20:07:01.732648    2411 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 20:07:01.732662    2411 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 20:07:01.879354    2411 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1003 20:07:01.882409    2411 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 20:07:01.882416    2411 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 20:07:02.379375    2411 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1003 20:07:02.389129    2411 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 20:07:02.389150    2411 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 20:07:02.879197    2411 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1003 20:07:02.882694    2411 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1003 20:07:02.886824    2411 api_server.go:141] control plane version: v1.31.1
	I1003 20:07:02.886830    2411 api_server.go:131] duration metric: took 3.509577916s to wait for apiserver health ...
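
The healthz wait above polls https://192.168.105.4:8441/healthz and treats 403 (anonymous user) and 500 (bootstrap post-start hooks not finished) responses as "not ready yet" until the endpoint returns 200 with body "ok". A minimal sketch of that polling loop, not minikube's api_server.go itself; the endpoint, retry cadence, and the decision to skip TLS verification (the probe is unauthenticated) are assumptions taken from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.105.4:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 are expected while the apiserver is still coming up.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
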
	I1003 20:07:02.886834    2411 cni.go:84] Creating CNI manager for ""
	I1003 20:07:02.886839    2411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:07:02.891000    2411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 20:07:02.894049    2411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 20:07:02.898412    2411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1003 20:07:02.904207    2411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 20:07:02.910452    2411 system_pods.go:59] 8 kube-system pods found
	I1003 20:07:02.910460    2411 system_pods.go:61] "coredns-7c65d6cfc9-5wd9p" [2b8ecfa8-8b75-4548-851d-884a41af5c14] Running
	I1003 20:07:02.910462    2411 system_pods.go:61] "coredns-7c65d6cfc9-b6mwq" [055fe939-dbf3-4f60-899f-b49849453c38] Running
	I1003 20:07:02.910464    2411 system_pods.go:61] "etcd-functional-063000" [17c87a89-26cc-46b9-9c41-671f843baa3a] Running
	I1003 20:07:02.910465    2411 system_pods.go:61] "kube-apiserver-functional-063000" [9f28003b-8262-47e7-a7d4-6facacd03e35] Running
	I1003 20:07:02.910466    2411 system_pods.go:61] "kube-controller-manager-functional-063000" [67c76316-c943-4916-8361-d0086afa6624] Running
	I1003 20:07:02.910468    2411 system_pods.go:61] "kube-proxy-tbbkh" [e6457328-a33d-4b47-b8ec-4b2ee58d2539] Running
	I1003 20:07:02.910469    2411 system_pods.go:61] "kube-scheduler-functional-063000" [c997a0bb-656f-4c5b-9e13-2049cefa25d9] Running
	I1003 20:07:02.910470    2411 system_pods.go:61] "storage-provisioner" [b5c72771-c572-46a2-b3cc-f40a4c63d36b] Running
	I1003 20:07:02.910472    2411 system_pods.go:74] duration metric: took 6.262208ms to wait for pod list to return data ...
	I1003 20:07:02.910475    2411 node_conditions.go:102] verifying NodePressure condition ...
	I1003 20:07:02.912786    2411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1003 20:07:02.912793    2411 node_conditions.go:123] node cpu capacity is 2
	I1003 20:07:02.912798    2411 node_conditions.go:105] duration metric: took 2.321ms to run NodePressure ...
	I1003 20:07:02.912804    2411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:07:03.137994    2411 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1003 20:07:03.141925    2411 retry.go:31] will retry after 240.809205ms: kubelet not initialised
	I1003 20:07:03.398050    2411 retry.go:31] will retry after 442.116483ms: kubelet not initialised
	I1003 20:07:03.845188    2411 kubeadm.go:739] kubelet initialised
	I1003 20:07:03.845194    2411 kubeadm.go:740] duration metric: took 707.195542ms waiting for restarted kubelet to initialise ...
	I1003 20:07:03.845198    2411 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 20:07:03.848697    2411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:05.860931    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:07.864057    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:10.363143    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:12.859768    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:14.859841    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:17.356168    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:19.362819    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:21.364606    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:23.857930    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:25.859132    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:27.862184    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:30.362764    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:32.362820    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:34.863605    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:37.362126    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:39.363178    2411 pod_ready.go:103] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"False"
	I1003 20:07:40.863076    2411 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:40.863099    2411 pod_ready.go:82] duration metric: took 37.014670292s for pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.863116    2411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-b6mwq" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.870627    2411 pod_ready.go:93] pod "coredns-7c65d6cfc9-b6mwq" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:40.870639    2411 pod_ready.go:82] duration metric: took 7.514125ms for pod "coredns-7c65d6cfc9-b6mwq" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.870653    2411 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.877101    2411 pod_ready.go:93] pod "etcd-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:40.877108    2411 pod_ready.go:82] duration metric: took 6.448458ms for pod "etcd-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.877116    2411 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.883090    2411 pod_ready.go:93] pod "kube-apiserver-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:40.883094    2411 pod_ready.go:82] duration metric: took 5.974125ms for pod "kube-apiserver-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.883101    2411 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.887392    2411 pod_ready.go:93] pod "kube-controller-manager-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:40.887397    2411 pod_ready.go:82] duration metric: took 4.292166ms for pod "kube-controller-manager-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:40.887404    2411 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbbkh" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:41.258843    2411 pod_ready.go:93] pod "kube-proxy-tbbkh" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:41.258866    2411 pod_ready.go:82] duration metric: took 371.456625ms for pod "kube-proxy-tbbkh" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:41.258887    2411 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:41.658854    2411 pod_ready.go:93] pod "kube-scheduler-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:41.658878    2411 pod_ready.go:82] duration metric: took 399.978708ms for pod "kube-scheduler-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:41.658902    2411 pod_ready.go:39] duration metric: took 37.813980333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
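
The pod_ready loop above repeatedly fetches each system-critical pod and waits for its Ready condition to become True (coredns-7c65d6cfc9-5wd9p took about 37s here). A minimal sketch of that kind of wait using client-go; the kubeconfig path and pod name are taken from the log, and the 4-minute budget mirrors the "waiting up to 4m0s" messages, but this is an illustrative standalone program, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from the log above; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19546-1040/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-5wd9p", metav1.GetOptions{})
		if err == nil {
			// A pod is "Ready" when its PodReady condition is True.
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
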
	I1003 20:07:41.658950    2411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:07:41.674917    2411 ops.go:34] apiserver oom_adj: -16
	I1003 20:07:41.674930    2411 kubeadm.go:597] duration metric: took 44.199009667s to restartPrimaryControlPlane
	I1003 20:07:41.674940    2411 kubeadm.go:394] duration metric: took 44.21427025s to StartCluster
	I1003 20:07:41.674969    2411 settings.go:142] acquiring lock: {Name:mkcb41cafeed9afeb88d9d6f184696173f92f60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:07:41.675312    2411 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:07:41.676430    2411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:07:41.677252    2411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:07:41.677268    2411 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:07:41.677387    2411 addons.go:69] Setting default-storageclass=true in profile "functional-063000"
	I1003 20:07:41.677422    2411 addons.go:69] Setting storage-provisioner=true in profile "functional-063000"
	I1003 20:07:41.677441    2411 addons.go:234] Setting addon storage-provisioner=true in "functional-063000"
	W1003 20:07:41.677448    2411 addons.go:243] addon storage-provisioner should already be in state true
	I1003 20:07:41.677446    2411 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:07:41.677475    2411 host.go:66] Checking if "functional-063000" exists ...
	I1003 20:07:41.677483    2411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-063000"
	I1003 20:07:41.680189    2411 addons.go:234] Setting addon default-storageclass=true in "functional-063000"
	W1003 20:07:41.680199    2411 addons.go:243] addon default-storageclass should already be in state true
	I1003 20:07:41.680229    2411 host.go:66] Checking if "functional-063000" exists ...
	I1003 20:07:41.681310    2411 out.go:177] * Verifying Kubernetes components...
	I1003 20:07:41.686071    2411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:07:41.686082    2411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:07:41.686106    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
	I1003 20:07:41.689103    2411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:07:41.689217    2411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:07:41.693347    2411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:07:41.693353    2411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:07:41.693362    2411 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
	I1003 20:07:41.833908    2411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:07:41.840172    2411 node_ready.go:35] waiting up to 6m0s for node "functional-063000" to be "Ready" ...
	I1003 20:07:41.854229    2411 node_ready.go:49] node "functional-063000" has status "Ready":"True"
	I1003 20:07:41.854237    2411 node_ready.go:38] duration metric: took 14.0545ms for node "functional-063000" to be "Ready" ...
	I1003 20:07:41.854240    2411 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 20:07:41.957515    2411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:07:41.958909    2411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:07:42.056095    2411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:42.263678    2411 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1003 20:07:42.270649    2411 addons.go:510] duration metric: took 593.389292ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1003 20:07:42.454509    2411 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:42.454516    2411 pod_ready.go:82] duration metric: took 398.417625ms for pod "coredns-7c65d6cfc9-5wd9p" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:42.454522    2411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b6mwq" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:42.855463    2411 pod_ready.go:93] pod "coredns-7c65d6cfc9-b6mwq" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:42.855480    2411 pod_ready.go:82] duration metric: took 400.956375ms for pod "coredns-7c65d6cfc9-b6mwq" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:42.855494    2411 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:43.259177    2411 pod_ready.go:93] pod "etcd-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:43.259209    2411 pod_ready.go:82] duration metric: took 403.704083ms for pod "etcd-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:43.259226    2411 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:43.658817    2411 pod_ready.go:93] pod "kube-apiserver-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:43.658842    2411 pod_ready.go:82] duration metric: took 399.603958ms for pod "kube-apiserver-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:43.658863    2411 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:44.055213    2411 pod_ready.go:93] pod "kube-controller-manager-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:44.055223    2411 pod_ready.go:82] duration metric: took 396.354291ms for pod "kube-controller-manager-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:44.055230    2411 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tbbkh" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:44.457855    2411 pod_ready.go:93] pod "kube-proxy-tbbkh" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:44.457873    2411 pod_ready.go:82] duration metric: took 402.639542ms for pod "kube-proxy-tbbkh" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:44.457885    2411 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:44.859203    2411 pod_ready.go:93] pod "kube-scheduler-functional-063000" in "kube-system" namespace has status "Ready":"True"
	I1003 20:07:44.859222    2411 pod_ready.go:82] duration metric: took 401.3285ms for pod "kube-scheduler-functional-063000" in "kube-system" namespace to be "Ready" ...
	I1003 20:07:44.859245    2411 pod_ready.go:39] duration metric: took 3.00501675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1003 20:07:44.859303    2411 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:07:44.859700    2411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:07:44.879411    2411 api_server.go:72] duration metric: took 3.202159041s to wait for apiserver process to appear ...
	I1003 20:07:44.879419    2411 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:07:44.879433    2411 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1003 20:07:44.886612    2411 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1003 20:07:44.887690    2411 api_server.go:141] control plane version: v1.31.1
	I1003 20:07:44.887699    2411 api_server.go:131] duration metric: took 8.275ms to wait for apiserver health ...
	I1003 20:07:44.887706    2411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 20:07:45.068122    2411 system_pods.go:59] 8 kube-system pods found
	I1003 20:07:45.068150    2411 system_pods.go:61] "coredns-7c65d6cfc9-5wd9p" [2b8ecfa8-8b75-4548-851d-884a41af5c14] Running
	I1003 20:07:45.068158    2411 system_pods.go:61] "coredns-7c65d6cfc9-b6mwq" [055fe939-dbf3-4f60-899f-b49849453c38] Running
	I1003 20:07:45.068166    2411 system_pods.go:61] "etcd-functional-063000" [17c87a89-26cc-46b9-9c41-671f843baa3a] Running
	I1003 20:07:45.068172    2411 system_pods.go:61] "kube-apiserver-functional-063000" [978317e2-fbc9-4717-8680-e02554a00db3] Running
	I1003 20:07:45.068180    2411 system_pods.go:61] "kube-controller-manager-functional-063000" [67c76316-c943-4916-8361-d0086afa6624] Running
	I1003 20:07:45.068186    2411 system_pods.go:61] "kube-proxy-tbbkh" [e6457328-a33d-4b47-b8ec-4b2ee58d2539] Running
	I1003 20:07:45.068191    2411 system_pods.go:61] "kube-scheduler-functional-063000" [c997a0bb-656f-4c5b-9e13-2049cefa25d9] Running
	I1003 20:07:45.068196    2411 system_pods.go:61] "storage-provisioner" [b5c72771-c572-46a2-b3cc-f40a4c63d36b] Running
	I1003 20:07:45.068206    2411 system_pods.go:74] duration metric: took 180.494542ms to wait for pod list to return data ...
	I1003 20:07:45.068217    2411 default_sa.go:34] waiting for default service account to be created ...
	I1003 20:07:45.258970    2411 default_sa.go:45] found service account: "default"
	I1003 20:07:45.258993    2411 default_sa.go:55] duration metric: took 190.769583ms for default service account to be created ...
	I1003 20:07:45.259009    2411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 20:07:45.466378    2411 system_pods.go:86] 8 kube-system pods found
	I1003 20:07:45.466400    2411 system_pods.go:89] "coredns-7c65d6cfc9-5wd9p" [2b8ecfa8-8b75-4548-851d-884a41af5c14] Running
	I1003 20:07:45.466412    2411 system_pods.go:89] "coredns-7c65d6cfc9-b6mwq" [055fe939-dbf3-4f60-899f-b49849453c38] Running
	I1003 20:07:45.466418    2411 system_pods.go:89] "etcd-functional-063000" [17c87a89-26cc-46b9-9c41-671f843baa3a] Running
	I1003 20:07:45.466425    2411 system_pods.go:89] "kube-apiserver-functional-063000" [978317e2-fbc9-4717-8680-e02554a00db3] Running
	I1003 20:07:45.466434    2411 system_pods.go:89] "kube-controller-manager-functional-063000" [67c76316-c943-4916-8361-d0086afa6624] Running
	I1003 20:07:45.466440    2411 system_pods.go:89] "kube-proxy-tbbkh" [e6457328-a33d-4b47-b8ec-4b2ee58d2539] Running
	I1003 20:07:45.466445    2411 system_pods.go:89] "kube-scheduler-functional-063000" [c997a0bb-656f-4c5b-9e13-2049cefa25d9] Running
	I1003 20:07:45.466449    2411 system_pods.go:89] "storage-provisioner" [b5c72771-c572-46a2-b3cc-f40a4c63d36b] Running
	I1003 20:07:45.466461    2411 system_pods.go:126] duration metric: took 207.447ms to wait for k8s-apps to be running ...
	I1003 20:07:45.466475    2411 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 20:07:45.466742    2411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:07:45.485505    2411 system_svc.go:56] duration metric: took 19.028166ms WaitForService to wait for kubelet
	I1003 20:07:45.485520    2411 kubeadm.go:582] duration metric: took 3.808273041s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:07:45.485541    2411 node_conditions.go:102] verifying NodePressure condition ...
	I1003 20:07:45.660357    2411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1003 20:07:45.660385    2411 node_conditions.go:123] node cpu capacity is 2
	I1003 20:07:45.660413    2411 node_conditions.go:105] duration metric: took 174.865334ms to run NodePressure ...
	I1003 20:07:45.660442    2411 start.go:241] waiting for startup goroutines ...
	I1003 20:07:45.660459    2411 start.go:246] waiting for cluster config update ...
	I1003 20:07:45.660484    2411 start.go:255] writing updated cluster config ...
	I1003 20:07:45.661883    2411 ssh_runner.go:195] Run: rm -f paused
	I1003 20:07:45.728470    2411 start.go:600] kubectl: 1.30.2, cluster: 1.31.1 (minor skew: 1)
	I1003 20:07:45.732686    2411 out.go:177] * Done! kubectl is now configured to use "functional-063000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 04 03:08:27 functional-063000 dockerd[6151]: time="2024-10-04T03:08:27.099977871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:08:27 functional-063000 dockerd[6151]: time="2024-10-04T03:08:27.099988703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:08:27 functional-063000 dockerd[6151]: time="2024-10-04T03:08:27.100026446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:08:27 functional-063000 cri-dockerd[6401]: time="2024-10-04T03:08:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d62d113e9a4a2d27e84ee628e1cf82dcfcbe599b3d7fd7fabe534058859da45d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 04 03:08:33 functional-063000 cri-dockerd[6401]: time="2024-10-04T03:08:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.511287864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.511534323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.511560319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.511619892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.515952088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.516005621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.516016369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.516049572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 04 03:08:33 functional-063000 dockerd[6145]: time="2024-10-04T03:08:33.544111279Z" level=info msg="ignoring event" container=f2d0dac2fed4b6cea982d329ffb03e081fa32eeef1c59349a7279925cfa35d04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.544504171Z" level=info msg="shim disconnected" id=f2d0dac2fed4b6cea982d329ffb03e081fa32eeef1c59349a7279925cfa35d04 namespace=moby
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.544654063Z" level=warning msg="cleaning up after shim disconnected" id=f2d0dac2fed4b6cea982d329ffb03e081fa32eeef1c59349a7279925cfa35d04 namespace=moby
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.544709429Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 03:08:33 functional-063000 dockerd[6145]: time="2024-10-04T03:08:33.576456189Z" level=info msg="ignoring event" container=1a25abd4d281fef43211ad3f7eb7c523d04af66efd07b5ddbfda3cac6741d279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.576565130Z" level=info msg="shim disconnected" id=1a25abd4d281fef43211ad3f7eb7c523d04af66efd07b5ddbfda3cac6741d279 namespace=moby
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.576625911Z" level=warning msg="cleaning up after shim disconnected" id=1a25abd4d281fef43211ad3f7eb7c523d04af66efd07b5ddbfda3cac6741d279 namespace=moby
	Oct 04 03:08:33 functional-063000 dockerd[6151]: time="2024-10-04T03:08:33.576630452Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 04 03:08:35 functional-063000 dockerd[6145]: time="2024-10-04T03:08:35.002643436Z" level=info msg="ignoring event" container=d62d113e9a4a2d27e84ee628e1cf82dcfcbe599b3d7fd7fabe534058859da45d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 04 03:08:35 functional-063000 dockerd[6151]: time="2024-10-04T03:08:35.003381813Z" level=info msg="shim disconnected" id=d62d113e9a4a2d27e84ee628e1cf82dcfcbe599b3d7fd7fabe534058859da45d namespace=moby
	Oct 04 03:08:35 functional-063000 dockerd[6151]: time="2024-10-04T03:08:35.003442636Z" level=warning msg="cleaning up after shim disconnected" id=d62d113e9a4a2d27e84ee628e1cf82dcfcbe599b3d7fd7fabe534058859da45d namespace=moby
	Oct 04 03:08:35 functional-063000 dockerd[6151]: time="2024-10-04T03:08:35.003447094Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1a25abd4d281f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 seconds ago        Exited              mount-munger              0                   d62d113e9a4a2       busybox-mount
	f2d0dac2fed4b       72565bf5bbedf                                                                                         2 seconds ago        Exited              echoserver-arm            2                   91ecdda3cb644       hello-node-64b4f8f9ff-wcbhb
	5e8114ffc3183       72565bf5bbedf                                                                                         9 seconds ago        Exited              echoserver-arm            2                   84797b7385a6e       hello-node-connect-65d86f57f4-8d8zr
	35b2dd2c8fb7d       nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0                         25 seconds ago       Running             myfrontend                0                   0535db565762e       sp-pod
	91abf67aa12c2       nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         40 seconds ago       Running             nginx                     0                   5e4f8a785f595       nginx-svc
	ed1bca3eec3f3       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   7f3160cd92f45       storage-provisioner
	6b565fd863c80       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   ee0a8c8432033       coredns-7c65d6cfc9-5wd9p
	84907895c9a4a       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   e55e5e3cb0375       coredns-7c65d6cfc9-b6mwq
	cfa9dd38e60a4       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   449beb961820a       kube-proxy-tbbkh
	28668229ef890       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   7f3160cd92f45       storage-provisioner
	3daaeb1d7c659       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   e7c1d828a5234       kube-controller-manager-functional-063000
	72295fb844517       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   4d3f88c5dcb74       kube-scheduler-functional-063000
	cf30e60031c0f       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   f868641e61500       etcd-functional-063000
	10fac62806495       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   417a519b0a66e       kube-apiserver-functional-063000
	e9570513fd78f       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   31224cb2c4134       coredns-7c65d6cfc9-5wd9p
	897032cb75f9e       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   8a5d3758c4c93       coredns-7c65d6cfc9-b6mwq
	85373af55c4a7       24a140c548c07                                                                                         2 minutes ago        Exited              kube-proxy                1                   419dd3db943af       kube-proxy-tbbkh
	3eadbaf644904       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   9819345ee5f38       kube-controller-manager-functional-063000
	cc32810f6b4c4       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   01f81d3f9ff73       etcd-functional-063000
	d2bbffa512ec8       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   0cfc67f8e1283       kube-scheduler-functional-063000
	
	
	==> coredns [6b565fd863c8] <==
	[INFO] 127.0.0.1:45589 - 58057 "HINFO IN 7325754835914606588.7200806151115027141. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005977591s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1596822710]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:07:04.131) (total time: 30001ms):
	Trace[1596822710]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:07:34.132)
	Trace[1596822710]: [30.00180672s] [30.00180672s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1428957863]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:07:04.131) (total time: 30002ms):
	Trace[1428957863]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:07:34.134)
	Trace[1428957863]: [30.002558618s] [30.002558618s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1848214145]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:07:04.131) (total time: 30002ms):
	Trace[1848214145]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (03:07:34.134)
	Trace[1848214145]: [30.002738209s] [30.002738209s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:7938 - 7796 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.0000909s
	[INFO] 10.244.0.1:4493 - 7604 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000143516s
	[INFO] 10.244.0.1:38539 - 22399 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000032494s
	[INFO] 10.244.0.1:50982 - 41971 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001829802s
	[INFO] 10.244.0.1:41255 - 6545 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000069363s
	
	
	==> coredns [84907895c9a4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47905 - 7960 "HINFO IN 1781727134115605634.1861327344142525464. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006196548s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[35578094]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:07:04.127) (total time: 30000ms):
	Trace[35578094]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:07:34.128)
	Trace[35578094]: [30.000952781s] [30.000952781s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1886011373]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:07:04.127) (total time: 30001ms):
	Trace[1886011373]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:07:34.128)
	Trace[1886011373]: [30.001490801s] [30.001490801s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1352510276]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (04-Oct-2024 03:07:04.127) (total time: 30001ms):
	Trace[1352510276]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:07:34.128)
	Trace[1352510276]: [30.001339915s] [30.001339915s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.1:29499 - 12238 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000080819s
	
	
	==> coredns [897032cb75f9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48855 - 27753 "HINFO IN 1319295185134643914.4145148577253986355. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00443101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e9570513fd78] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57657 - 47538 "HINFO IN 7130397788264167639.3243671885050520615. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004109277s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-063000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-063000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=functional-063000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_05_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:05:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-063000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:08:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:08:33 +0000   Fri, 04 Oct 2024 03:05:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:08:33 +0000   Fri, 04 Oct 2024 03:05:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:08:33 +0000   Fri, 04 Oct 2024 03:05:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:08:33 +0000   Fri, 04 Oct 2024 03:05:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-063000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e264d1d422f424c8fa097f54acae553
	  System UUID:                9e264d1d422f424c8fa097f54acae553
	  Boot ID:                    4ec0b906-d8b3-4268-963c-5befe2c57127
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-wcbhb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     hello-node-connect-65d86f57f4-8d8zr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 coredns-7c65d6cfc9-5wd9p                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m54s
	  kube-system                 coredns-7c65d6cfc9-b6mwq                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m54s
	  kube-system                 etcd-functional-063000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m59s
	  kube-system                 kube-apiserver-functional-063000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-functional-063000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-proxy-tbbkh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-scheduler-functional-063000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (6%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m53s                  kube-proxy       
	  Normal  Starting                 91s                    kube-proxy       
	  Normal  Starting                 2m25s                  kube-proxy       
	  Normal  Starting                 3m                     kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m59s (x2 over 2m59s)  kubelet          Node functional-063000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x2 over 2m59s)  kubelet          Node functional-063000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m59s (x2 over 2m59s)  kubelet          Node functional-063000 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                2m56s                  kubelet          Node functional-063000 status is now: NodeReady
	  Normal  RegisteredNode           2m55s                  node-controller  Node functional-063000 event: Registered Node functional-063000 in Controller
	  Normal  NodeHasSufficientPID     2m29s (x7 over 2m29s)  kubelet          Node functional-063000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node functional-063000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node functional-063000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m23s                  node-controller  Node functional-063000 event: Registered Node functional-063000 in Controller
	  Normal  Starting                 97s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)      kubelet          Node functional-063000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)      kubelet          Node functional-063000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 97s)      kubelet          Node functional-063000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                    node-controller  Node functional-063000 event: Registered Node functional-063000 in Controller
	
	
	==> dmesg <==
	[ +14.172737] systemd-fstab-generator[5146]: Ignoring "noauto" option for root device
	[  +0.058545] kauditd_printk_skb: 56 callbacks suppressed
	[ +20.141701] systemd-fstab-generator[5610]: Ignoring "noauto" option for root device
	[  +0.059053] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.109070] systemd-fstab-generator[5645]: Ignoring "noauto" option for root device
	[  +0.105891] systemd-fstab-generator[5657]: Ignoring "noauto" option for root device
	[  +0.112820] systemd-fstab-generator[5671]: Ignoring "noauto" option for root device
	[  +5.139160] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.367077] systemd-fstab-generator[6354]: Ignoring "noauto" option for root device
	[  +0.089437] systemd-fstab-generator[6366]: Ignoring "noauto" option for root device
	[  +0.096348] systemd-fstab-generator[6378]: Ignoring "noauto" option for root device
	[  +0.105984] systemd-fstab-generator[6393]: Ignoring "noauto" option for root device
	[  +0.223692] systemd-fstab-generator[6557]: Ignoring "noauto" option for root device
	[  +1.017844] systemd-fstab-generator[6680]: Ignoring "noauto" option for root device
	[Oct 4 03:07] kauditd_printk_skb: 199 callbacks suppressed
	[ +12.759395] kauditd_printk_skb: 53 callbacks suppressed
	[ +25.282051] systemd-fstab-generator[8078]: Ignoring "noauto" option for root device
	[  +5.465585] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.061126] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.008212] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 4 03:08] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.293419] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.945230] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.792177] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.998989] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [cc32810f6b4c] <==
	{"level":"info","ts":"2024-10-04T03:06:08.588788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-04T03:06:08.588855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-04T03:06:08.588892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-04T03:06:08.588909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-04T03:06:08.588943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-04T03:06:08.588967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-04T03:06:08.593946Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-063000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:06:08.594010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:06:08.594332Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:06:08.594362Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:06:08.594391Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:06:08.595807Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:06:08.596099Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:06:08.597638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:06:08.597919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-04T03:06:44.417488Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-04T03:06:44.417526Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-063000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-04T03:06:44.417582Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:06:44.417625Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:06:44.436497Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-04T03:06:44.436519Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-04T03:06:44.436539Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-04T03:06:44.441245Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-04T03:06:44.441306Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-04T03:06:44.441311Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-063000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [cf30e60031c0] <==
	{"level":"info","ts":"2024-10-04T03:06:59.490703Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-10-04T03:06:59.490765Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:06:59.490797Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:06:59.497852Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:06:59.498468Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-04T03:06:59.498553Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-04T03:06:59.498573Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-04T03:06:59.499500Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T03:06:59.501359Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T03:07:01.270670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-04T03:07:01.270876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-04T03:07:01.270948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-04T03:07:01.270982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-04T03:07:01.271000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-04T03:07:01.271026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-04T03:07:01.271069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-04T03:07:01.276691Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-063000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:07:01.277265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:07:01.278076Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:07:01.279829Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:07:01.281566Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:07:01.282575Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-04T03:07:01.283391Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:07:01.283559Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:07:01.284146Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 03:08:35 up 3 min,  0 users,  load average: 0.66, 0.40, 0.17
	Linux functional-063000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [10fac6280649] <==
	I1004 03:07:01.884283       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1004 03:07:01.884296       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1004 03:07:01.886718       1 aggregator.go:171] initial CRD sync complete...
	I1004 03:07:01.886744       1 autoregister_controller.go:144] Starting autoregister controller
	I1004 03:07:01.886775       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 03:07:01.886791       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:07:01.894544       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1004 03:07:01.894677       1 policy_source.go:224] refreshing policies
	I1004 03:07:01.899243       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:07:02.787004       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1004 03:07:02.885978       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1004 03:07:02.886526       1 controller.go:615] quota admission added evaluator for: endpoints
	I1004 03:07:02.888062       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:07:03.027351       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1004 03:07:03.032360       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1004 03:07:03.042050       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1004 03:07:03.049170       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:07:03.051092       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:07:47.288913       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.185.160"}
	I1004 03:07:52.009062       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.87.140"}
	I1004 03:08:02.452911       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1004 03:08:02.498548       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.55.202"}
	E1004 03:08:09.035941       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49685: use of closed network connection
	E1004 03:08:17.652191       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49695: use of closed network connection
	I1004 03:08:17.732938       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.183.56"}
	
	
	==> kube-controller-manager [3daaeb1d7c65] <==
	I1004 03:07:05.766249       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:07:05.841142       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:07:05.841424       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:07:35.789701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.398691ms"
	I1004 03:07:35.790956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.7µs"
	I1004 03:07:40.633654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.425361ms"
	I1004 03:07:40.633998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.156µs"
	I1004 03:08:02.463964       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="8.390104ms"
	I1004 03:08:02.469190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.1955ms"
	I1004 03:08:02.469346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.412µs"
	I1004 03:08:02.472785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.245µs"
	I1004 03:08:03.290932       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-063000"
	I1004 03:08:09.458431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.412µs"
	I1004 03:08:10.512151       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="38.576µs"
	I1004 03:08:11.576395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="35.91µs"
	I1004 03:08:17.699724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="9.215704ms"
	I1004 03:08:17.705370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.620824ms"
	I1004 03:08:17.705605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.663µs"
	I1004 03:08:17.708194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="15.414µs"
	I1004 03:08:18.663003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="30.328µs"
	I1004 03:08:19.696059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="178.636µs"
	I1004 03:08:26.787499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.371µs"
	I1004 03:08:33.464661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.913µs"
	I1004 03:08:33.776689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-063000"
	I1004 03:08:33.882765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="28.329µs"
	
	
	==> kube-controller-manager [3eadbaf64490] <==
	I1004 03:06:12.472714       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1004 03:06:12.473072       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1004 03:06:12.478414       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1004 03:06:12.478460       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1004 03:06:12.478533       1 shared_informer.go:320] Caches are synced for service account
	I1004 03:06:12.478672       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1004 03:06:12.478450       1 shared_informer.go:320] Caches are synced for stateful set
	I1004 03:06:12.478456       1 shared_informer.go:320] Caches are synced for deployment
	I1004 03:06:12.479468       1 shared_informer.go:320] Caches are synced for ephemeral
	I1004 03:06:12.479523       1 shared_informer.go:320] Caches are synced for TTL
	I1004 03:06:12.480347       1 shared_informer.go:320] Caches are synced for disruption
	I1004 03:06:12.480428       1 shared_informer.go:320] Caches are synced for PVC protection
	I1004 03:06:12.557802       1 shared_informer.go:320] Caches are synced for persistent volume
	I1004 03:06:12.581547       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:06:12.627712       1 shared_informer.go:320] Caches are synced for attach detach
	I1004 03:06:12.641879       1 shared_informer.go:320] Caches are synced for resource quota
	I1004 03:06:12.678393       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1004 03:06:12.735284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="303.016468ms"
	I1004 03:06:12.735746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.599µs"
	I1004 03:06:13.098663       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:06:13.153047       1 shared_informer.go:320] Caches are synced for garbage collector
	I1004 03:06:13.153262       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1004 03:06:13.747741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.846225ms"
	I1004 03:06:13.747826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="47.398µs"
	I1004 03:06:40.319983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-063000"
	
	
	==> kube-proxy [85373af55c4a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:06:09.832940       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:06:09.838236       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1004 03:06:09.838265       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:06:09.875432       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:06:09.875452       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:06:09.875467       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:06:09.876279       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:06:09.876355       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:06:09.876360       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:06:09.876962       1 config.go:199] "Starting service config controller"
	I1004 03:06:09.876965       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:06:09.876972       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:06:09.876974       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:06:09.877096       1 config.go:328] "Starting node config controller"
	I1004 03:06:09.877099       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:06:09.977080       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:06:09.977130       1 shared_informer.go:320] Caches are synced for node config
	I1004 03:06:09.977080       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cfa9dd38e60a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1004 03:07:04.171952       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1004 03:07:04.175283       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1004 03:07:04.175390       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1004 03:07:04.182572       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1004 03:07:04.182588       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 03:07:04.182600       1 server_linux.go:169] "Using iptables Proxier"
	I1004 03:07:04.183171       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1004 03:07:04.183249       1 server.go:483] "Version info" version="v1.31.1"
	I1004 03:07:04.183260       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:07:04.183706       1 config.go:199] "Starting service config controller"
	I1004 03:07:04.183715       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1004 03:07:04.183727       1 config.go:105] "Starting endpoint slice config controller"
	I1004 03:07:04.183729       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1004 03:07:04.183898       1 config.go:328] "Starting node config controller"
	I1004 03:07:04.183901       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1004 03:07:04.284294       1 shared_informer.go:320] Caches are synced for node config
	I1004 03:07:04.284307       1 shared_informer.go:320] Caches are synced for service config
	I1004 03:07:04.284317       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [72295fb84451] <==
	I1004 03:07:00.002053       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:07:01.797700       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:07:01.797716       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:07:01.797720       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:07:01.797723       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:07:01.831683       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:07:01.831698       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:07:01.832692       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:07:01.832739       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:07:01.832831       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:07:01.832860       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:07:01.933844       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d2bbffa512ec] <==
	I1004 03:06:07.924869       1 serving.go:386] Generated self-signed cert in-memory
	W1004 03:06:09.123517       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 03:06:09.123707       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 03:06:09.123736       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 03:06:09.123757       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 03:06:09.154053       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1004 03:06:09.154168       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:06:09.155138       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 03:06:09.155165       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 03:06:09.155271       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1004 03:06:09.155343       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1004 03:06:09.255819       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1004 03:06:44.413313       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 04 03:08:11 functional-063000 kubelet[6687]: I1004 03:08:11.563778    6687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.871696061 podStartE2EDuration="2.56376018s" podCreationTimestamp="2024-10-04 03:08:09 +0000 UTC" firstStartedPulling="2024-10-04 03:08:09.94412889 +0000 UTC m=+71.541554747" lastFinishedPulling="2024-10-04 03:08:10.636193009 +0000 UTC m=+72.233618866" observedRunningTime="2024-10-04 03:08:11.563569255 +0000 UTC m=+73.160995113" watchObservedRunningTime="2024-10-04 03:08:11.56376018 +0000 UTC m=+73.161186038"
	Oct 04 03:08:11 functional-063000 kubelet[6687]: I1004 03:08:11.565884    6687 scope.go:117] "RemoveContainer" containerID="2d7224398f2f2437f188dbf7282baf697b8ef8628036de90fb443b2345f88b61"
	Oct 04 03:08:11 functional-063000 kubelet[6687]: E1004 03:08:11.566026    6687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-8d8zr_default(58529b9e-0df9-4f5a-a505-2c0164dfcb9b)\"" pod="default/hello-node-connect-65d86f57f4-8d8zr" podUID="58529b9e-0df9-4f5a-a505-2c0164dfcb9b"
	Oct 04 03:08:17 functional-063000 kubelet[6687]: I1004 03:08:17.824074    6687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzhq\" (UniqueName: \"kubernetes.io/projected/9a898e13-3f42-4a57-b032-2cf98100309a-kube-api-access-9lzhq\") pod \"hello-node-64b4f8f9ff-wcbhb\" (UID: \"9a898e13-3f42-4a57-b032-2cf98100309a\") " pod="default/hello-node-64b4f8f9ff-wcbhb"
	Oct 04 03:08:18 functional-063000 kubelet[6687]: I1004 03:08:18.653264    6687 scope.go:117] "RemoveContainer" containerID="68d792c41c49639ed4ec5513689a025b5f0c5bb44deb14cdba01b0c95a3dc804"
	Oct 04 03:08:19 functional-063000 kubelet[6687]: I1004 03:08:19.681536    6687 scope.go:117] "RemoveContainer" containerID="68d792c41c49639ed4ec5513689a025b5f0c5bb44deb14cdba01b0c95a3dc804"
	Oct 04 03:08:19 functional-063000 kubelet[6687]: I1004 03:08:19.681925    6687 scope.go:117] "RemoveContainer" containerID="a954ab9da9e12bb65a7c41cd8b47af9f6f30fda70132c197865ca6c2e2957408"
	Oct 04 03:08:19 functional-063000 kubelet[6687]: E1004 03:08:19.682106    6687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-wcbhb_default(9a898e13-3f42-4a57-b032-2cf98100309a)\"" pod="default/hello-node-64b4f8f9ff-wcbhb" podUID="9a898e13-3f42-4a57-b032-2cf98100309a"
	Oct 04 03:08:26 functional-063000 kubelet[6687]: I1004 03:08:26.462277    6687 scope.go:117] "RemoveContainer" containerID="2d7224398f2f2437f188dbf7282baf697b8ef8628036de90fb443b2345f88b61"
	Oct 04 03:08:26 functional-063000 kubelet[6687]: I1004 03:08:26.780592    6687 scope.go:117] "RemoveContainer" containerID="2d7224398f2f2437f188dbf7282baf697b8ef8628036de90fb443b2345f88b61"
	Oct 04 03:08:26 functional-063000 kubelet[6687]: I1004 03:08:26.780762    6687 scope.go:117] "RemoveContainer" containerID="5e8114ffc318336de54e66f8f817ce9b6c7bd27670dd436d2c6f2b6a11b6bbaa"
	Oct 04 03:08:26 functional-063000 kubelet[6687]: E1004 03:08:26.780834    6687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-8d8zr_default(58529b9e-0df9-4f5a-a505-2c0164dfcb9b)\"" pod="default/hello-node-connect-65d86f57f4-8d8zr" podUID="58529b9e-0df9-4f5a-a505-2c0164dfcb9b"
	Oct 04 03:08:26 functional-063000 kubelet[6687]: I1004 03:08:26.809173    6687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-test-volume\") pod \"busybox-mount\" (UID: \"69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f\") " pod="default/busybox-mount"
	Oct 04 03:08:26 functional-063000 kubelet[6687]: I1004 03:08:26.809196    6687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhzk\" (UniqueName: \"kubernetes.io/projected/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-kube-api-access-pdhzk\") pod \"busybox-mount\" (UID: \"69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f\") " pod="default/busybox-mount"
	Oct 04 03:08:33 functional-063000 kubelet[6687]: I1004 03:08:33.459551    6687 scope.go:117] "RemoveContainer" containerID="a954ab9da9e12bb65a7c41cd8b47af9f6f30fda70132c197865ca6c2e2957408"
	Oct 04 03:08:33 functional-063000 kubelet[6687]: I1004 03:08:33.874869    6687 scope.go:117] "RemoveContainer" containerID="a954ab9da9e12bb65a7c41cd8b47af9f6f30fda70132c197865ca6c2e2957408"
	Oct 04 03:08:33 functional-063000 kubelet[6687]: I1004 03:08:33.875366    6687 scope.go:117] "RemoveContainer" containerID="f2d0dac2fed4b6cea982d329ffb03e081fa32eeef1c59349a7279925cfa35d04"
	Oct 04 03:08:33 functional-063000 kubelet[6687]: E1004 03:08:33.875495    6687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-wcbhb_default(9a898e13-3f42-4a57-b032-2cf98100309a)\"" pod="default/hello-node-64b4f8f9ff-wcbhb" podUID="9a898e13-3f42-4a57-b032-2cf98100309a"
	Oct 04 03:08:35 functional-063000 kubelet[6687]: I1004 03:08:35.094459    6687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-test-volume\") pod \"69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f\" (UID: \"69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f\") "
	Oct 04 03:08:35 functional-063000 kubelet[6687]: I1004 03:08:35.094492    6687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdhzk\" (UniqueName: \"kubernetes.io/projected/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-kube-api-access-pdhzk\") pod \"69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f\" (UID: \"69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f\") "
	Oct 04 03:08:35 functional-063000 kubelet[6687]: I1004 03:08:35.094686    6687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-test-volume" (OuterVolumeSpecName: "test-volume") pod "69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f" (UID: "69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 04 03:08:35 functional-063000 kubelet[6687]: I1004 03:08:35.097394    6687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-kube-api-access-pdhzk" (OuterVolumeSpecName: "kube-api-access-pdhzk") pod "69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f" (UID: "69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f"). InnerVolumeSpecName "kube-api-access-pdhzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 04 03:08:35 functional-063000 kubelet[6687]: I1004 03:08:35.194978    6687 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pdhzk\" (UniqueName: \"kubernetes.io/projected/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-kube-api-access-pdhzk\") on node \"functional-063000\" DevicePath \"\""
	Oct 04 03:08:35 functional-063000 kubelet[6687]: I1004 03:08:35.195020    6687 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f-test-volume\") on node \"functional-063000\" DevicePath \"\""
	Oct 04 03:08:35 functional-063000 kubelet[6687]: I1004 03:08:35.935813    6687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62d113e9a4a2d27e84ee628e1cf82dcfcbe599b3d7fd7fabe534058859da45d"
	
	
	==> storage-provisioner [28668229ef89] <==
	I1004 03:07:04.133789       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 03:07:04.138145       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ed1bca3eec3f] <==
	I1004 03:07:16.564646       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 03:07:16.567945       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 03:07:16.568288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 03:07:33.969572       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 03:07:33.969674       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-063000_0dab70ca-54b2-4d54-9344-b38c19c0cf4b!
	I1004 03:07:33.970561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"383624dc-7963-4890-ace5-0dcaf0e55160", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-063000_0dab70ca-54b2-4d54-9344-b38c19c0cf4b became leader
	I1004 03:07:34.070436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-063000_0dab70ca-54b2-4d54-9344-b38c19c0cf4b!
	I1004 03:07:56.875419       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1004 03:07:56.875567       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2ea3afc4-1a15-4fcf-9872-a47d57a8f86e 338 0 2024-10-04 03:05:41 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-04 03:05:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-86b25b98-85a2-41bb-bdec-4f3e43b32e6a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  86b25b98-85a2-41bb-bdec-4f3e43b32e6a 715 0 2024-10-04 03:07:56 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-04 03:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-04 03:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1004 03:07:56.876021       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-86b25b98-85a2-41bb-bdec-4f3e43b32e6a" provisioned
	I1004 03:07:56.876068       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1004 03:07:56.876108       1 volume_store.go:212] Trying to save persistentvolume "pvc-86b25b98-85a2-41bb-bdec-4f3e43b32e6a"
	I1004 03:07:56.876847       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"86b25b98-85a2-41bb-bdec-4f3e43b32e6a", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1004 03:07:56.879947       1 volume_store.go:219] persistentvolume "pvc-86b25b98-85a2-41bb-bdec-4f3e43b32e6a" saved
	I1004 03:07:56.880423       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"86b25b98-85a2-41bb-bdec-4f3e43b32e6a", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-86b25b98-85a2-41bb-bdec-4f3e43b32e6a
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-063000 -n functional-063000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-063000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-063000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-063000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-063000/192.168.105.4
	Start Time:       Thu, 03 Oct 2024 20:08:26 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  docker://1a25abd4d281fef43211ad3f7eb7c523d04af66efd07b5ddbfda3cac6741d279
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 03 Oct 2024 20:08:33 -0700
	      Finished:     Thu, 03 Oct 2024 20:08:33 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pdhzk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pdhzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/busybox-mount to functional-063000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 6.332s (6.332s including waiting). Image size: 3547125 bytes.
	  Normal  Created    3s    kubelet            Created container mount-munger
	  Normal  Started    3s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (33.85s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (162.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-darwin-arm64 -p ha-006000 node stop m02 -v=7 --alsologtostderr: (12.195080917s)
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
E1003 20:15:35.524872    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: (1m15.056614666s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 3 (1m15.039284208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:17:05.678245    3034 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1003 20:17:05.678252    3034 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (162.29s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E1003 20:17:38.521472    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:17:51.678048    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:18:19.396254    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.061547916s)
ha_test.go:415: expected profile "ha-006000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\
":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"
KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\"
:false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",
\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 3 (1m15.073978041s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:19:35.839049    3063 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1003 20:19:35.839097    3063 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.14s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (185.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.133341834s)

                                                
                                                
-- stdout --
	* Starting "ha-006000-m02" control-plane node in "ha-006000" cluster
	* Restarting existing qemu2 VM for "ha-006000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-006000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:19:35.914686    3073 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:19:35.915064    3073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:19:35.915069    3073 out.go:358] Setting ErrFile to fd 2...
	I1003 20:19:35.915072    3073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:19:35.915245    3073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:19:35.915589    3073 mustload.go:65] Loading cluster: ha-006000
	I1003 20:19:35.915942    3073 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1003 20:19:35.916249    3073 host.go:58] "ha-006000-m02" host status: Stopped
	I1003 20:19:35.920714    3073 out.go:177] * Starting "ha-006000-m02" control-plane node in "ha-006000" cluster
	I1003 20:19:35.924635    3073 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:19:35.924653    3073 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:19:35.924661    3073 cache.go:56] Caching tarball of preloaded images
	I1003 20:19:35.924767    3073 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:19:35.924776    3073 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:19:35.924848    3073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/ha-006000/config.json ...
	I1003 20:19:35.925309    3073 start.go:360] acquireMachinesLock for ha-006000-m02: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:19:35.925397    3073 start.go:364] duration metric: took 54.083µs to acquireMachinesLock for "ha-006000-m02"
	I1003 20:19:35.925407    3073 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:19:35.925412    3073 fix.go:54] fixHost starting: m02
	I1003 20:19:35.925544    3073 fix.go:112] recreateIfNeeded on ha-006000-m02: state=Stopped err=<nil>
	W1003 20:19:35.925551    3073 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:19:35.928572    3073 out.go:177] * Restarting existing qemu2 VM for "ha-006000-m02" ...
	I1003 20:19:35.932582    3073 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:19:35.932631    3073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:23:3d:08:fb:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/disk.qcow2
	I1003 20:19:35.936246    3073 main.go:141] libmachine: STDOUT: 
	I1003 20:19:35.936282    3073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:19:35.936321    3073 fix.go:56] duration metric: took 10.906875ms for fixHost
	I1003 20:19:35.936325    3073 start.go:83] releasing machines lock for "ha-006000-m02", held for 10.923334ms
	W1003 20:19:35.936334    3073 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:19:35.936379    3073 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:19:35.936384    3073 start.go:729] Will try again in 5 seconds ...
	I1003 20:19:40.938626    3073 start.go:360] acquireMachinesLock for ha-006000-m02: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:19:40.939159    3073 start.go:364] duration metric: took 405.083µs to acquireMachinesLock for "ha-006000-m02"
	I1003 20:19:40.939294    3073 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:19:40.939309    3073 fix.go:54] fixHost starting: m02
	I1003 20:19:40.939941    3073 fix.go:112] recreateIfNeeded on ha-006000-m02: state=Stopped err=<nil>
	W1003 20:19:40.939961    3073 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:19:40.941602    3073 out.go:177] * Restarting existing qemu2 VM for "ha-006000-m02" ...
	I1003 20:19:40.945355    3073 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:19:40.945546    3073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:23:3d:08:fb:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/disk.qcow2
	I1003 20:19:40.953284    3073 main.go:141] libmachine: STDOUT: 
	I1003 20:19:40.953338    3073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:19:40.953403    3073 fix.go:56] duration metric: took 14.092917ms for fixHost
	I1003 20:19:40.953415    3073 start.go:83] releasing machines lock for "ha-006000-m02", held for 14.239375ms
	W1003 20:19:40.953587    3073 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:19:40.958421    3073 out.go:201] 
	W1003 20:19:40.962355    3073 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:19:40.962369    3073 out.go:270] * 
	* 
	W1003 20:19:40.967220    3073 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:19:40.972411    3073 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1003 20:19:35.914686    3073 out.go:345] Setting OutFile to fd 1 ...
I1003 20:19:35.915064    3073 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:19:35.915069    3073 out.go:358] Setting ErrFile to fd 2...
I1003 20:19:35.915072    3073 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:19:35.915245    3073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
I1003 20:19:35.915589    3073 mustload.go:65] Loading cluster: ha-006000
I1003 20:19:35.915942    3073 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W1003 20:19:35.916249    3073 host.go:58] "ha-006000-m02" host status: Stopped
I1003 20:19:35.920714    3073 out.go:177] * Starting "ha-006000-m02" control-plane node in "ha-006000" cluster
I1003 20:19:35.924635    3073 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1003 20:19:35.924653    3073 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1003 20:19:35.924661    3073 cache.go:56] Caching tarball of preloaded images
I1003 20:19:35.924767    3073 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1003 20:19:35.924776    3073 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1003 20:19:35.924848    3073 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/ha-006000/config.json ...
I1003 20:19:35.925309    3073 start.go:360] acquireMachinesLock for ha-006000-m02: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1003 20:19:35.925397    3073 start.go:364] duration metric: took 54.083µs to acquireMachinesLock for "ha-006000-m02"
I1003 20:19:35.925407    3073 start.go:96] Skipping create...Using existing machine configuration
I1003 20:19:35.925412    3073 fix.go:54] fixHost starting: m02
I1003 20:19:35.925544    3073 fix.go:112] recreateIfNeeded on ha-006000-m02: state=Stopped err=<nil>
W1003 20:19:35.925551    3073 fix.go:138] unexpected machine state, will restart: <nil>
I1003 20:19:35.928572    3073 out.go:177] * Restarting existing qemu2 VM for "ha-006000-m02" ...
I1003 20:19:35.932582    3073 qemu.go:418] Using hvf for hardware acceleration
I1003 20:19:35.932631    3073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:23:3d:08:fb:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/disk.qcow2
I1003 20:19:35.936246    3073 main.go:141] libmachine: STDOUT: 
I1003 20:19:35.936282    3073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I1003 20:19:35.936321    3073 fix.go:56] duration metric: took 10.906875ms for fixHost
I1003 20:19:35.936325    3073 start.go:83] releasing machines lock for "ha-006000-m02", held for 10.923334ms
W1003 20:19:35.936334    3073 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1003 20:19:35.936379    3073 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1003 20:19:35.936384    3073 start.go:729] Will try again in 5 seconds ...
I1003 20:19:40.938626    3073 start.go:360] acquireMachinesLock for ha-006000-m02: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1003 20:19:40.939159    3073 start.go:364] duration metric: took 405.083µs to acquireMachinesLock for "ha-006000-m02"
I1003 20:19:40.939294    3073 start.go:96] Skipping create...Using existing machine configuration
I1003 20:19:40.939309    3073 fix.go:54] fixHost starting: m02
I1003 20:19:40.939941    3073 fix.go:112] recreateIfNeeded on ha-006000-m02: state=Stopped err=<nil>
W1003 20:19:40.939961    3073 fix.go:138] unexpected machine state, will restart: <nil>
I1003 20:19:40.941602    3073 out.go:177] * Restarting existing qemu2 VM for "ha-006000-m02" ...
I1003 20:19:40.945355    3073 qemu.go:418] Using hvf for hardware acceleration
I1003 20:19:40.945546    3073 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:23:3d:08:fb:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000-m02/disk.qcow2
I1003 20:19:40.953284    3073 main.go:141] libmachine: STDOUT: 
I1003 20:19:40.953338    3073 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I1003 20:19:40.953403    3073 fix.go:56] duration metric: took 14.092917ms for fixHost
I1003 20:19:40.953415    3073 start.go:83] releasing machines lock for "ha-006000-m02", held for 14.239375ms
W1003 20:19:40.953587    3073 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1003 20:19:40.958421    3073 out.go:201] 
W1003 20:19:40.962355    3073 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1003 20:19:40.962369    3073 out.go:270] * 
* 
W1003 20:19:40.967220    3073 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1003 20:19:40.972411    3073 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-006000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: (1m15.065704583s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (30.080709416s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: i/o timeout

                                                
                                                
** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
E1003 20:22:38.524467    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 3 (1m15.072791083s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:22:41.197982    3095 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1003 20:22:41.197990    3095 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (185.35s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E1003 20:22:51.679182    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.053711875s)
ha_test.go:309: expected profile "ha-006000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1
,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":fa
lse,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"M
ountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
E1003 20:24:01.618113    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 3 (1m15.042236375s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 20:25:11.289990    3114 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1003 20:25:11.290028    3114 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.10s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-006000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-006000 -v=7 --alsologtostderr
E1003 20:27:38.522449    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:27:51.676346    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:29:14.758495    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-006000 -v=7 --alsologtostderr: (5m27.159544166s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225443625s)

-- stdout --
	* [ha-006000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:30:38.593549    3455 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:38.593748    3455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:38.593752    3455 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:38.593755    3455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:38.593901    3455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:30:38.595287    3455 out.go:352] Setting JSON to false
	I1003 20:30:38.617620    3455 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3609,"bootTime":1728009029,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:30:38.617692    3455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:30:38.622205    3455 out.go:177] * [ha-006000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:30:38.630268    3455 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:30:38.630295    3455 notify.go:220] Checking for updates...
	I1003 20:30:38.637189    3455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:30:38.640137    3455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:30:38.643200    3455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:30:38.646224    3455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:30:38.647482    3455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:30:38.650558    3455 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:38.650608    3455 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:30:38.655216    3455 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:30:38.660205    3455 start.go:297] selected driver: qemu2
	I1003 20:30:38.660210    3455 start.go:901] validating driver "qemu2" against &{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-006000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass
:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:30:38.660276    3455 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:30:38.663191    3455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:30:38.663216    3455 cni.go:84] Creating CNI manager for ""
	I1003 20:30:38.663246    3455 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1003 20:30:38.663291    3455 start.go:340] cluster config:
	{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-006000 Namespace:default APIServerHAVIP:192.168.
105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fals
e inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:30:38.668608    3455 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:30:38.677179    3455 out.go:177] * Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	I1003 20:30:38.681210    3455 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:30:38.681223    3455 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:30:38.681230    3455 cache.go:56] Caching tarball of preloaded images
	I1003 20:30:38.681302    3455 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:30:38.681307    3455 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:30:38.681386    3455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/ha-006000/config.json ...
	I1003 20:30:38.681823    3455 start.go:360] acquireMachinesLock for ha-006000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:30:38.681873    3455 start.go:364] duration metric: took 43.375µs to acquireMachinesLock for "ha-006000"
	I1003 20:30:38.681882    3455 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:30:38.681887    3455 fix.go:54] fixHost starting: 
	I1003 20:30:38.682008    3455 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W1003 20:30:38.682017    3455 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:30:38.686174    3455 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I1003 20:30:38.694243    3455 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:30:38.694282    3455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:46:7f:ca:71:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/disk.qcow2
	I1003 20:30:38.696381    3455 main.go:141] libmachine: STDOUT: 
	I1003 20:30:38.696400    3455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:30:38.696429    3455 fix.go:56] duration metric: took 14.540542ms for fixHost
	I1003 20:30:38.696434    3455 start.go:83] releasing machines lock for "ha-006000", held for 14.557292ms
	W1003 20:30:38.696441    3455 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:30:38.696491    3455 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:30:38.696495    3455 start.go:729] Will try again in 5 seconds ...
	I1003 20:30:43.698600    3455 start.go:360] acquireMachinesLock for ha-006000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:30:43.698974    3455 start.go:364] duration metric: took 309.125µs to acquireMachinesLock for "ha-006000"
	I1003 20:30:43.699078    3455 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:30:43.699099    3455 fix.go:54] fixHost starting: 
	I1003 20:30:43.699776    3455 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W1003 20:30:43.699801    3455 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:30:43.704229    3455 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I1003 20:30:43.708169    3455 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:30:43.708318    3455 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:46:7f:ca:71:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/disk.qcow2
	I1003 20:30:43.718718    3455 main.go:141] libmachine: STDOUT: 
	I1003 20:30:43.718825    3455 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:30:43.718923    3455 fix.go:56] duration metric: took 19.824375ms for fixHost
	I1003 20:30:43.718941    3455 start.go:83] releasing machines lock for "ha-006000", held for 19.947791ms
	W1003 20:30:43.719168    3455 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:30:43.728144    3455 out.go:201] 
	W1003 20:30:43.731134    3455 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:30:43.731167    3455 out.go:270] * 
	* 
	W1003 20:30:43.733766    3455 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:30:43.742168    3455 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-006000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-006000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (34.916292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)
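Note: every restart attempt in this run fails at the same point: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client and the connection to "/var/run/socket_vmnet" is refused, which suggests the socket_vmnet daemon was not listening on that socket on the build host, rather than a fault in the cluster itself. A minimal pre-flight check (a sketch only, assuming socket_vmnet is expected at the paths shown in the log above) might be:

	ls -l /var/run/socket_vmnet   # the listening socket should exist
	pgrep -fl socket_vmnet        # the daemon process should be running

The same "Connection refused" failure recurs in TestMultiControlPlane/serial/RestartCluster below.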

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.388583ms)

-- stdout --
	* The control-plane node ha-006000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-006000"

-- /stdout --
** stderr ** 
	I1003 20:30:43.888722    3468 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:43.888988    3468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:43.888991    3468 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:43.888994    3468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:43.889119    3468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:30:43.889359    3468 mustload.go:65] Loading cluster: ha-006000
	I1003 20:30:43.889590    3468 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1003 20:30:43.889907    3468 out.go:270] ! The control-plane node ha-006000 host is not running (will try others): state=Stopped
	! The control-plane node ha-006000 host is not running (will try others): state=Stopped
	W1003 20:30:43.890021    3468 out.go:270] ! The control-plane node ha-006000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-006000-m02 host is not running (will try others): state=Stopped
	I1003 20:30:43.894273    3468 out.go:177] * The control-plane node ha-006000-m03 host is not running: state=Stopped
	I1003 20:30:43.897232    3468 out.go:177]   To start a cluster, run: "minikube start -p ha-006000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-006000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (31.286042ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1003 20:30:43.930161    3470 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:30:43.930359    3470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:43.930362    3470 out.go:358] Setting ErrFile to fd 2...
	I1003 20:30:43.930372    3470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:30:43.930488    3470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:30:43.930628    3470 out.go:352] Setting JSON to false
	I1003 20:30:43.930643    3470 mustload.go:65] Loading cluster: ha-006000
	I1003 20:30:43.930686    3470 notify.go:220] Checking for updates...
	I1003 20:30:43.930881    3470 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:30:43.930890    3470 status.go:174] checking status of ha-006000 ...
	I1003 20:30:43.931125    3470 status.go:371] ha-006000 host status = "Stopped" (err=<nil>)
	I1003 20:30:43.931128    3470 status.go:384] host is not running, skipping remaining checks
	I1003 20:30:43.931130    3470 status.go:176] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:43.931140    3470 status.go:174] checking status of ha-006000-m02 ...
	I1003 20:30:43.931225    3470 status.go:371] ha-006000-m02 host status = "Stopped" (err=<nil>)
	I1003 20:30:43.931227    3470 status.go:384] host is not running, skipping remaining checks
	I1003 20:30:43.931229    3470 status.go:176] ha-006000-m02 status: &{Name:ha-006000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:43.931233    3470 status.go:174] checking status of ha-006000-m03 ...
	I1003 20:30:43.931320    3470 status.go:371] ha-006000-m03 host status = "Stopped" (err=<nil>)
	I1003 20:30:43.931322    3470 status.go:384] host is not running, skipping remaining checks
	I1003 20:30:43.931324    3470 status.go:176] ha-006000-m03 status: &{Name:ha-006000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:30:43.931327    3470 status.go:174] checking status of ha-006000-m04 ...
	I1003 20:30:43.931430    3470 status.go:371] ha-006000-m04 host status = "Stopped" (err=<nil>)
	I1003 20:30:43.931433    3470 status.go:384] host is not running, skipping remaining checks
	I1003 20:30:43.931434    3470 status.go:176] ha-006000-m04 status: &{Name:ha-006000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (31.42025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-006000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,
\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"log
viewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP
\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (31.543875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (300.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 stop -v=7 --alsologtostderr
E1003 20:32:38.537381    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:32:51.693125    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-006000 stop -v=7 --alsologtostderr: (5m0.130321583s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (68.971709ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1003 20:35:44.261028    3498 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:44.262492    3498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:44.262496    3498 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:44.262499    3498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:44.262681    3498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:35:44.262848    3498 out.go:352] Setting JSON to false
	I1003 20:35:44.262860    3498 mustload.go:65] Loading cluster: ha-006000
	I1003 20:35:44.262901    3498 notify.go:220] Checking for updates...
	I1003 20:35:44.263179    3498 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:44.263190    3498 status.go:174] checking status of ha-006000 ...
	I1003 20:35:44.263492    3498 status.go:371] ha-006000 host status = "Stopped" (err=<nil>)
	I1003 20:35:44.263497    3498 status.go:384] host is not running, skipping remaining checks
	I1003 20:35:44.263499    3498 status.go:176] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:35:44.263510    3498 status.go:174] checking status of ha-006000-m02 ...
	I1003 20:35:44.263634    3498 status.go:371] ha-006000-m02 host status = "Stopped" (err=<nil>)
	I1003 20:35:44.263639    3498 status.go:384] host is not running, skipping remaining checks
	I1003 20:35:44.263641    3498 status.go:176] ha-006000-m02 status: &{Name:ha-006000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:35:44.263646    3498 status.go:174] checking status of ha-006000-m03 ...
	I1003 20:35:44.263767    3498 status.go:371] ha-006000-m03 host status = "Stopped" (err=<nil>)
	I1003 20:35:44.263772    3498 status.go:384] host is not running, skipping remaining checks
	I1003 20:35:44.263774    3498 status.go:176] ha-006000-m03 status: &{Name:ha-006000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 20:35:44.263778    3498 status.go:174] checking status of ha-006000-m04 ...
	I1003 20:35:44.263895    3498 status.go:371] ha-006000-m04 host status = "Stopped" (err=<nil>)
	I1003 20:35:44.263899    3498 status.go:384] host is not running, skipping remaining checks
	I1003 20:35:44.263901    3498 status.go:176] ha-006000-m04 status: &{Name:ha-006000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-006000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (32.753917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (300.23s)

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186219667s)

-- stdout --
	* [ha-006000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:35:44.326920    3502 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:44.327057    3502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:44.327060    3502 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:44.327063    3502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:44.327189    3502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:35:44.328275    3502 out.go:352] Setting JSON to false
	I1003 20:35:44.345751    3502 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3915,"bootTime":1728009029,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:35:44.345822    3502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:35:44.351304    3502 out.go:177] * [ha-006000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:35:44.359177    3502 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:35:44.359207    3502 notify.go:220] Checking for updates...
	I1003 20:35:44.365153    3502 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:35:44.368277    3502 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:35:44.369484    3502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:35:44.372217    3502 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:35:44.375198    3502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:35:44.378551    3502 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:35:44.378812    3502 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:35:44.383173    3502 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:35:44.390194    3502 start.go:297] selected driver: qemu2
	I1003 20:35:44.390202    3502 start.go:901] validating driver "qemu2" against &{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-006000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:44.390293    3502 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:35:44.392785    3502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:35:44.392808    3502 cni.go:84] Creating CNI manager for ""
	I1003 20:35:44.392840    3502 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1003 20:35:44.392879    3502 start.go:340] cluster config:
	{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-006000 Namespace:default APIServerHAVIP:192.168.
105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fals
e inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:35:44.397496    3502 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:35:44.405196    3502 out.go:177] * Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	I1003 20:35:44.409202    3502 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:35:44.409216    3502 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:35:44.409225    3502 cache.go:56] Caching tarball of preloaded images
	I1003 20:35:44.409300    3502 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:35:44.409306    3502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:35:44.409392    3502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/ha-006000/config.json ...
	I1003 20:35:44.409824    3502 start.go:360] acquireMachinesLock for ha-006000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:35:44.409872    3502 start.go:364] duration metric: took 42.625µs to acquireMachinesLock for "ha-006000"
	I1003 20:35:44.409885    3502 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:35:44.409890    3502 fix.go:54] fixHost starting: 
	I1003 20:35:44.410008    3502 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W1003 20:35:44.410020    3502 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:35:44.414226    3502 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I1003 20:35:44.422152    3502 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:35:44.422191    3502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:46:7f:ca:71:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/disk.qcow2
	I1003 20:35:44.424511    3502 main.go:141] libmachine: STDOUT: 
	I1003 20:35:44.424536    3502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:35:44.424569    3502 fix.go:56] duration metric: took 14.678541ms for fixHost
	I1003 20:35:44.424573    3502 start.go:83] releasing machines lock for "ha-006000", held for 14.696959ms
	W1003 20:35:44.424581    3502 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:35:44.424632    3502 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:35:44.424637    3502 start.go:729] Will try again in 5 seconds ...
	I1003 20:35:49.426767    3502 start.go:360] acquireMachinesLock for ha-006000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:35:49.427158    3502 start.go:364] duration metric: took 319.375µs to acquireMachinesLock for "ha-006000"
	I1003 20:35:49.427274    3502 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:35:49.427292    3502 fix.go:54] fixHost starting: 
	I1003 20:35:49.428011    3502 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W1003 20:35:49.428041    3502 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:35:49.432560    3502 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I1003 20:35:49.440415    3502 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:35:49.440609    3502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:46:7f:ca:71:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/ha-006000/disk.qcow2
	I1003 20:35:49.450305    3502 main.go:141] libmachine: STDOUT: 
	I1003 20:35:49.450363    3502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:35:49.450434    3502 fix.go:56] duration metric: took 23.138583ms for fixHost
	I1003 20:35:49.450449    3502 start.go:83] releasing machines lock for "ha-006000", held for 23.265833ms
	W1003 20:35:49.450655    3502 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:35:49.458375    3502 out.go:201] 
	W1003 20:35:49.462496    3502 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:35:49.462534    3502 out.go:270] * 
	* 
	W1003 20:35:49.465044    3502 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:35:49.476356    3502 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (72.830667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
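
Editor's note: every restart attempt above dies before qemu-system-aarch64 even launches, because the qemu2 driver cannot reach the socket_vmnet daemon at the unix socket it passes to socket_vmnet_client. The snippet below is a minimal, hypothetical standalone probe (not part of minikube or this test suite) that dials the same socket; the path is taken from the SocketVMnetPath value in the profile config dumps in this report, and a dead daemon produces the same "Connection refused" condition seen in the driver output.

// socketprobe.go: hypothetical standalone probe, not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the profile config dumped in this report.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same failure mode the driver reports as
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
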

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-006000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,
\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"log
viewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP
\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (31.1355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
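
Editor's note: ha_test.go:415 decodes the output of "minikube profile list --output json" and asserts the profile's Status field; since the cluster never restarted, the profile still reports "Starting" rather than "Degraded". The sketch below only illustrates the shape of that check; the struct and field names are assumptions, and only the JSON layout ({"invalid":[...],"valid":[{"Name":...,"Status":...}]}) comes from the log above.

// profilestatus.go: illustrative sketch, not the actual ha_test.go code.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Shortened from the "profile list --output json" output above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-006000","Status":"Starting"}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-006000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
		}
	}
}
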

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-006000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-006000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.416875ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-006000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:35:49.672698    3517 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:35:49.672898    3517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:49.672902    3517 out.go:358] Setting ErrFile to fd 2...
	I1003 20:35:49.672904    3517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:35:49.673032    3517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:35:49.673261    3517 mustload.go:65] Loading cluster: ha-006000
	I1003 20:35:49.673481    3517 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1003 20:35:49.673782    3517 out.go:270] ! The control-plane node ha-006000 host is not running (will try others): state=Stopped
	! The control-plane node ha-006000 host is not running (will try others): state=Stopped
	W1003 20:35:49.673885    3517 out.go:270] ! The control-plane node ha-006000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-006000-m02 host is not running (will try others): state=Stopped
	I1003 20:35:49.677024    3517 out.go:177] * The control-plane node ha-006000-m03 host is not running: state=Stopped
	I1003 20:35:49.680818    3517 out.go:177]   To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-006000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (31.433541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:309: expected profile "ha-006000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logvie
wer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":
\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (31.550209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (10.02s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-728000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-728000 --driver=qemu2 : exit status 80 (9.949171s)

                                                
                                                
-- stdout --
	* [image-728000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-728000" primary control-plane node in "image-728000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-728000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-728000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-728000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-728000 -n image-728000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-728000 -n image-728000: exit status 7 (69.795292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-728000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.02s)

                                                
                                    
TestJSONOutput/start/Command (9.67s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-297000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-297000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.668373125s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e739d8ea-bf8e-4fe5-ac7c-0a8cdf535662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-297000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"33f6d02c-adaf-4c61-9da3-a023aa255785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"d90f4723-93f0-45c8-a202-dd383c8212fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig"}}
	{"specversion":"1.0","id":"2a7b1f16-bd64-4d22-84de-01db41c15431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c95f4037-448f-4146-a3b3-21814f3cb33d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a33dad61-32d1-4cac-841e-23288c7c053a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube"}}
	{"specversion":"1.0","id":"9abe88b7-1ab8-453c-8ee8-fe75855889ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a2da89f-09d7-4606-8cb2-2c8eb1bdc4a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4987a2fa-cfad-410c-a3f6-bf2bdd2fe22a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"479c3dbc-e039-4288-84a7-782f7cb0151b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-297000\" primary control-plane node in \"json-output-297000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8278f6a3-c710-4080-916b-c1bb34fd3bb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"fd60b45d-a686-44d4-95c8-03175b2c4d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-297000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"4fb5d7c3-19fd-4f3e-b4bb-fb33b97f8115","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ce0ba5ee-7722-46be-8d48-e7167af7cb7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"50363a9b-f212-4e9f-8943-79680a9b50bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-297000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5f4242dc-eab2-4e19-93a1-1f95b735d5a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"787e7d4f-ade5-4e6f-ae4e-f890a1eb67d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-297000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.67s)
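
Editor's note: the JSON-output test parses each stdout line as a CloudEvent. When the qemu2 driver fails, raw "OUTPUT:" / "ERROR: ..." text is interleaved with the JSON stream, and the first such non-JSON line trips Go's decoder with the "invalid character 'O' looking for beginning of value" error recorded above. A minimal sketch of that behaviour follows; it is hypothetical code, not json_output_test.go itself, and the sample lines are shortened from the stdout above.

// cloudevents_parse.go: minimal sketch of why the CloudEvents check fails.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// One valid CloudEvent line followed by the raw driver output.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19546"}}
OUTPUT: `
	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal([]byte(sc.Text()), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("not a CloudEvent:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
}
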

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-297000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-297000 --output=json --user=testUser: exit status 83 (76.7735ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e5a77905-bfbc-4c4b-aad1-af899ee72318","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-297000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"69b7f673-5dcf-46bc-9ece-1eada96ae611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-297000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-297000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-297000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-297000 --output=json --user=testUser: exit status 83 (43.951334ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-297000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-297000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-297000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-297000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-050000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-050000 --driver=qemu2 : exit status 80 (9.929462125s)

                                                
                                                
-- stdout --
	* [first-050000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-050000" primary control-plane node in "first-050000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-050000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-050000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-050000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-03 20:36:24.006145 -0700 PDT m=+2936.725689376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-051000 -n second-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-051000 -n second-051000: exit status 85 (79.207917ms)

                                                
                                                
-- stdout --
	* Profile "second-051000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-051000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-051000" host is not running, skipping log retrieval (state="* Profile \"second-051000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-051000\"")
helpers_test.go:175: Cleaning up "second-051000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-051000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-03 20:36:24.201383 -0700 PDT m=+2936.920927209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-050000 -n first-050000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-050000 -n first-050000: exit status 7 (31.356209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-050000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-050000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-050000
--- FAIL: TestMinikubeProfile (10.23s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.43761975s)

                                                
                                                
-- stdout --
	* [mount-start-1-362000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-362000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-362000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-362000 -n mount-start-1-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-362000 -n mount-start-1-362000: exit status 7 (70.314666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.51s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-817000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-817000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.766791667s)

                                                
                                                
-- stdout --
	* [multinode-817000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-817000" primary control-plane node in "multinode-817000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-817000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:36:35.022185    3662 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:36:35.022340    3662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:36:35.022343    3662 out.go:358] Setting ErrFile to fd 2...
	I1003 20:36:35.022345    3662 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:36:35.022466    3662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:36:35.023636    3662 out.go:352] Setting JSON to false
	I1003 20:36:35.041289    3662 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3966,"bootTime":1728009029,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:36:35.041364    3662 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:36:35.047156    3662 out.go:177] * [multinode-817000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:36:35.054063    3662 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:36:35.054117    3662 notify.go:220] Checking for updates...
	I1003 20:36:35.061076    3662 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:36:35.064113    3662 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:36:35.067119    3662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:36:35.070110    3662 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:36:35.073129    3662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:36:35.076289    3662 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:36:35.080135    3662 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:36:35.087029    3662 start.go:297] selected driver: qemu2
	I1003 20:36:35.087035    3662 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:36:35.087040    3662 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:36:35.089588    3662 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:36:35.093098    3662 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:36:35.096085    3662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:36:35.096101    3662 cni.go:84] Creating CNI manager for ""
	I1003 20:36:35.096128    3662 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1003 20:36:35.096132    3662 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:36:35.096159    3662 start.go:340] cluster config:
	{Name:multinode-817000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_v
mnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:36:35.100943    3662 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:36:35.108933    3662 out.go:177] * Starting "multinode-817000" primary control-plane node in "multinode-817000" cluster
	I1003 20:36:35.113069    3662 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:36:35.113087    3662 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:36:35.113104    3662 cache.go:56] Caching tarball of preloaded images
	I1003 20:36:35.113198    3662 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:36:35.113204    3662 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:36:35.113424    3662 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/multinode-817000/config.json ...
	I1003 20:36:35.113434    3662 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/multinode-817000/config.json: {Name:mk7311a6e3629f9a2749733ef8fb94f5448144ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:36:35.113728    3662 start.go:360] acquireMachinesLock for multinode-817000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:36:35.113776    3662 start.go:364] duration metric: took 42.417µs to acquireMachinesLock for "multinode-817000"
	I1003 20:36:35.113787    3662 start.go:93] Provisioning new machine with config: &{Name:multinode-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:36:35.113826    3662 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:36:35.116991    3662 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:36:35.134159    3662 start.go:159] libmachine.API.Create for "multinode-817000" (driver="qemu2")
	I1003 20:36:35.134183    3662 client.go:168] LocalClient.Create starting
	I1003 20:36:35.134249    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:36:35.134287    3662 main.go:141] libmachine: Decoding PEM data...
	I1003 20:36:35.134295    3662 main.go:141] libmachine: Parsing certificate...
	I1003 20:36:35.134339    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:36:35.134367    3662 main.go:141] libmachine: Decoding PEM data...
	I1003 20:36:35.134374    3662 main.go:141] libmachine: Parsing certificate...
	I1003 20:36:35.134761    3662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:36:35.258614    3662 main.go:141] libmachine: Creating SSH key...
	I1003 20:36:35.318114    3662 main.go:141] libmachine: Creating Disk image...
	I1003 20:36:35.318120    3662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:36:35.318325    3662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:36:35.328184    3662 main.go:141] libmachine: STDOUT: 
	I1003 20:36:35.328208    3662 main.go:141] libmachine: STDERR: 
	I1003 20:36:35.328259    3662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2 +20000M
	I1003 20:36:35.336552    3662 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:36:35.336574    3662 main.go:141] libmachine: STDERR: 
	I1003 20:36:35.336586    3662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:36:35.336591    3662 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:36:35.336605    3662 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:36:35.336630    3662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:4f:7d:63:de:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:36:35.338416    3662 main.go:141] libmachine: STDOUT: 
	I1003 20:36:35.338436    3662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:36:35.338456    3662 client.go:171] duration metric: took 204.26725ms to LocalClient.Create
	I1003 20:36:37.340685    3662 start.go:128] duration metric: took 2.226830292s to createHost
	I1003 20:36:37.340732    3662 start.go:83] releasing machines lock for "multinode-817000", held for 2.226946s
	W1003 20:36:37.340797    3662 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:36:37.353863    3662 out.go:177] * Deleting "multinode-817000" in qemu2 ...
	W1003 20:36:37.374990    3662 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:36:37.375017    3662 start.go:729] Will try again in 5 seconds ...
	I1003 20:36:42.377216    3662 start.go:360] acquireMachinesLock for multinode-817000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:36:42.377833    3662 start.go:364] duration metric: took 501.458µs to acquireMachinesLock for "multinode-817000"
	I1003 20:36:42.377956    3662 start.go:93] Provisioning new machine with config: &{Name:multinode-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:36:42.378208    3662 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:36:42.390886    3662 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:36:42.438300    3662 start.go:159] libmachine.API.Create for "multinode-817000" (driver="qemu2")
	I1003 20:36:42.438381    3662 client.go:168] LocalClient.Create starting
	I1003 20:36:42.438589    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:36:42.438676    3662 main.go:141] libmachine: Decoding PEM data...
	I1003 20:36:42.438697    3662 main.go:141] libmachine: Parsing certificate...
	I1003 20:36:42.438781    3662 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:36:42.438838    3662 main.go:141] libmachine: Decoding PEM data...
	I1003 20:36:42.438852    3662 main.go:141] libmachine: Parsing certificate...
	I1003 20:36:42.439460    3662 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:36:42.577248    3662 main.go:141] libmachine: Creating SSH key...
	I1003 20:36:42.696775    3662 main.go:141] libmachine: Creating Disk image...
	I1003 20:36:42.696781    3662 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:36:42.696981    3662 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:36:42.706821    3662 main.go:141] libmachine: STDOUT: 
	I1003 20:36:42.706847    3662 main.go:141] libmachine: STDERR: 
	I1003 20:36:42.706903    3662 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2 +20000M
	I1003 20:36:42.715272    3662 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:36:42.715287    3662 main.go:141] libmachine: STDERR: 
	I1003 20:36:42.715305    3662 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:36:42.715309    3662 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:36:42.715318    3662 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:36:42.715343    3662 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b8:ea:1f:36:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:36:42.717078    3662 main.go:141] libmachine: STDOUT: 
	I1003 20:36:42.717092    3662 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:36:42.717102    3662 client.go:171] duration metric: took 278.703958ms to LocalClient.Create
	I1003 20:36:44.719276    3662 start.go:128] duration metric: took 2.341038875s to createHost
	I1003 20:36:44.719337    3662 start.go:83] releasing machines lock for "multinode-817000", held for 2.341471667s
	W1003 20:36:44.719740    3662 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-817000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-817000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:36:44.729485    3662 out.go:201] 
	W1003 20:36:44.733485    3662 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:36:44.733508    3662 out.go:270] * 
	* 
	W1003 20:36:44.736120    3662 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:36:44.746410    3662 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-817000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (68.571542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.84s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (88.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.533166ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-817000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- rollout status deployment/busybox: exit status 1 (59.588416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.896791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:36:45.079523    1556 retry.go:31] will retry after 595.900221ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.305625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:36:45.782047    1556 retry.go:31] will retry after 922.709645ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.686667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:36:46.812768    1556 retry.go:31] will retry after 1.226108107s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.430833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:36:48.148664    1556 retry.go:31] will retry after 3.605102285s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.480625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:36:51.863586    1556 retry.go:31] will retry after 6.896414954s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.826167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:36:58.868270    1556 retry.go:31] will retry after 10.330832649s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.039583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:37:09.308482    1556 retry.go:31] will retry after 10.423003122s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.059417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:37:19.837084    1556 retry.go:31] will retry after 14.856639022s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.793625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1003 20:37:34.801867    1556 retry.go:31] will retry after 38.261233099s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1003 20:37:38.539638    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:37:51.694533    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.381333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.295167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.549917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.479792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.504833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.645042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (88.60s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-817000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.734291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.217458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-817000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-817000 -v 3 --alsologtostderr: exit status 83 (43.405083ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-817000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-817000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:13.556050    3742 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:13.556222    3742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.556225    3742 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:13.556228    3742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.556346    3742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:13.556567    3742 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:13.556769    3742 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:13.561032    3742 out.go:177] * The control-plane node multinode-817000 host is not running: state=Stopped
	I1003 20:38:13.564987    3742 out.go:177]   To start a cluster, run: "minikube start -p multinode-817000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-817000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.43925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-817000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-817000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.189584ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-817000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-817000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-817000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.91575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-817000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-817000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-817000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-817000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.901583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status --output json --alsologtostderr: exit status 7 (31.448416ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-817000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:13.773801    3754 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:13.773990    3754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.773993    3754 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:13.773996    3754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.774122    3754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:13.774254    3754 out.go:352] Setting JSON to true
	I1003 20:38:13.774265    3754 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:13.774312    3754 notify.go:220] Checking for updates...
	I1003 20:38:13.774492    3754 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:13.774505    3754 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:13.774743    3754 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:13.774747    3754 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:13.774749    3754 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-817000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.855208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 node stop m03: exit status 85 (48.662208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-817000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status: exit status 7 (31.3425ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr: exit status 7 (31.359334ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:13.917974    3762 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:13.918152    3762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.918155    3762 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:13.918157    3762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.918308    3762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:13.918422    3762 out.go:352] Setting JSON to false
	I1003 20:38:13.918435    3762 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:13.918480    3762 notify.go:220] Checking for updates...
	I1003 20:38:13.918658    3762 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:13.918667    3762 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:13.918891    3762 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:13.918895    3762 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:13.918897    3762 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr": multinode-817000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.499583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.192917ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:13.980902    3766 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:13.981180    3766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.981183    3766 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:13.981186    3766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:13.981333    3766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:13.981599    3766 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:13.981789    3766 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:13.986037    3766 out.go:201] 
	W1003 20:38:13.989006    3766 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1003 20:38:13.989016    3766 out.go:270] * 
	* 
	W1003 20:38:13.990687    3766 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:38:13.993884    3766 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1003 20:38:13.980902    3766 out.go:345] Setting OutFile to fd 1 ...
I1003 20:38:13.981180    3766 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:38:13.981183    3766 out.go:358] Setting ErrFile to fd 2...
I1003 20:38:13.981186    3766 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:38:13.981333    3766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
I1003 20:38:13.981599    3766 mustload.go:65] Loading cluster: multinode-817000
I1003 20:38:13.981789    3766 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:38:13.986037    3766 out.go:201] 
W1003 20:38:13.989006    3766 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1003 20:38:13.989016    3766 out.go:270] * 
* 
W1003 20:38:13.990687    3766 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1003 20:38:13.993884    3766 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-817000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (31.764042ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:14.028849    3768 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:14.029063    3768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:14.029066    3768 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:14.029068    3768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:14.029203    3768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:14.029324    3768 out.go:352] Setting JSON to false
	I1003 20:38:14.029335    3768 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:14.029394    3768 notify.go:220] Checking for updates...
	I1003 20:38:14.029522    3768 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:14.029531    3768 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:14.029791    3768 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:14.029795    3768 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:14.029797    3768 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:14.030678    1556 retry.go:31] will retry after 994.052815ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (74.981666ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:15.100038    3770 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:15.100305    3770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:15.100309    3770 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:15.100312    3770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:15.100477    3770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:15.100632    3770 out.go:352] Setting JSON to false
	I1003 20:38:15.100649    3770 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:15.100681    3770 notify.go:220] Checking for updates...
	I1003 20:38:15.100924    3770 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:15.100945    3770 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:15.101241    3770 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:15.101245    3770 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:15.101248    3770 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:15.102239    1556 retry.go:31] will retry after 2.228634214s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (75.004834ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:17.405796    3772 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:17.406022    3772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:17.406026    3772 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:17.406029    3772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:17.406218    3772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:17.406375    3772 out.go:352] Setting JSON to false
	I1003 20:38:17.406389    3772 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:17.406429    3772 notify.go:220] Checking for updates...
	I1003 20:38:17.406648    3772 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:17.406660    3772 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:17.406965    3772 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:17.406970    3772 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:17.406972    3772 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:17.407915    1556 retry.go:31] will retry after 3.290389181s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (74.505125ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:20.773014    3776 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:20.773249    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:20.773253    3776 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:20.773257    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:20.773427    3776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:20.773590    3776 out.go:352] Setting JSON to false
	I1003 20:38:20.773607    3776 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:20.773644    3776 notify.go:220] Checking for updates...
	I1003 20:38:20.773872    3776 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:20.773886    3776 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:20.774206    3776 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:20.774211    3776 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:20.774213    3776 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:20.775265    1556 retry.go:31] will retry after 2.043270456s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (75.617084ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:22.894191    3778 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:22.894442    3778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:22.894446    3778 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:22.894450    3778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:22.894618    3778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:22.894782    3778 out.go:352] Setting JSON to false
	I1003 20:38:22.894799    3778 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:22.894839    3778 notify.go:220] Checking for updates...
	I1003 20:38:22.895115    3778 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:22.895127    3778 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:22.895436    3778 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:22.895442    3778 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:22.895444    3778 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:22.896509    1556 retry.go:31] will retry after 2.5389965s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (75.5685ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:25.511209    3780 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:25.511483    3780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:25.511488    3780 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:25.511491    3780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:25.511665    3780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:25.511818    3780 out.go:352] Setting JSON to false
	I1003 20:38:25.511833    3780 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:25.511880    3780 notify.go:220] Checking for updates...
	I1003 20:38:25.512090    3780 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:25.512102    3780 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:25.512424    3780 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:25.512428    3780 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:25.512431    3780 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:25.513448    1556 retry.go:31] will retry after 9.295361687s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (73.586708ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:34.882503    3782 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:34.882754    3782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:34.882759    3782 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:34.882762    3782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:34.882931    3782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:34.883097    3782 out.go:352] Setting JSON to false
	I1003 20:38:34.883111    3782 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:34.883144    3782 notify.go:220] Checking for updates...
	I1003 20:38:34.883377    3782 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:34.883389    3782 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:34.883705    3782 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:34.883709    3782 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:34.883712    3782 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:34.884773    1556 retry.go:31] will retry after 11.028273871s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (75.22175ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:45.988614    3784 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:45.988867    3784 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:45.988871    3784 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:45.988875    3784 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:45.989052    3784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:45.989206    3784 out.go:352] Setting JSON to false
	I1003 20:38:45.989221    3784 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:45.989273    3784 notify.go:220] Checking for updates...
	I1003 20:38:45.989508    3784 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:45.989519    3784 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:45.989823    3784 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:45.989827    3784 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:45.989830    3784 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1003 20:38:45.990769    1556 retry.go:31] will retry after 8.769247597s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr: exit status 7 (73.390916ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:54.833533    3786 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:54.833786    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:54.833791    3786 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:54.833794    3786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:54.833988    3786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:54.834162    3786 out.go:352] Setting JSON to false
	I1003 20:38:54.834175    3786 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:38:54.834210    3786 notify.go:220] Checking for updates...
	I1003 20:38:54.834434    3786 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:54.834452    3786 status.go:174] checking status of multinode-817000 ...
	I1003 20:38:54.834760    3786 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:38:54.834764    3786 status.go:384] host is not running, skipping remaining checks
	I1003 20:38:54.834767    3786 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-817000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (34.619167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (40.92s)
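The repeated "will retry after ..." lines above come from the test polling `minikube status` until the profile reports Running, backing off between attempts. A minimal Go sketch of that pattern (the helper name waitForRunning and the jittered, doubling delay are illustrative assumptions, not the suite's actual code):

// Hedged sketch: poll "minikube status" with growing, jittered delays, similar in
// spirit to the retry.go "will retry after ..." lines above.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func waitForRunning(profile string, attempts int) error {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status")
		if err := cmd.Run(); err == nil {
			return nil // exit status 0: the host reports Running
		}
		// grow the delay and add jitter, mirroring the increasing waits in the log
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("%s never reached Running state", profile)
}

func main() {
	if err := waitForRunning("multinode-817000", 7); err != nil {
		fmt.Println(err)
	}
}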

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-817000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-817000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-817000: (3.811018041s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-817000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-817000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.220937625s)

                                                
                                                
-- stdout --
	* [multinode-817000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-817000" primary control-plane node in "multinode-817000" cluster
	* Restarting existing qemu2 VM for "multinode-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:38:58.775679    3812 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:38:58.775871    3812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:58.775875    3812 out.go:358] Setting ErrFile to fd 2...
	I1003 20:38:58.775879    3812 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:38:58.776068    3812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:38:58.777310    3812 out.go:352] Setting JSON to false
	I1003 20:38:58.797201    3812 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4109,"bootTime":1728009029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:38:58.797271    3812 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:38:58.802382    3812 out.go:177] * [multinode-817000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:38:58.809250    3812 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:38:58.809280    3812 notify.go:220] Checking for updates...
	I1003 20:38:58.816260    3812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:38:58.819236    3812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:38:58.822279    3812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:38:58.825185    3812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:38:58.828279    3812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:38:58.831612    3812 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:38:58.831669    3812 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:38:58.835225    3812 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:38:58.842264    3812 start.go:297] selected driver: qemu2
	I1003 20:38:58.842271    3812 start.go:901] validating driver "qemu2" against &{Name:multinode-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:38:58.842330    3812 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:38:58.844799    3812 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:38:58.844822    3812 cni.go:84] Creating CNI manager for ""
	I1003 20:38:58.844845    3812 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:38:58.844886    3812 start.go:340] cluster config:
	{Name:multinode-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:38:58.849661    3812 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:38:58.857244    3812 out.go:177] * Starting "multinode-817000" primary control-plane node in "multinode-817000" cluster
	I1003 20:38:58.861278    3812 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:38:58.861294    3812 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:38:58.861303    3812 cache.go:56] Caching tarball of preloaded images
	I1003 20:38:58.861397    3812 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:38:58.861403    3812 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:38:58.861479    3812 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/multinode-817000/config.json ...
	I1003 20:38:58.861871    3812 start.go:360] acquireMachinesLock for multinode-817000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:38:58.861924    3812 start.go:364] duration metric: took 46.541µs to acquireMachinesLock for "multinode-817000"
	I1003 20:38:58.861933    3812 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:38:58.861937    3812 fix.go:54] fixHost starting: 
	I1003 20:38:58.862076    3812 fix.go:112] recreateIfNeeded on multinode-817000: state=Stopped err=<nil>
	W1003 20:38:58.862085    3812 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:38:58.866288    3812 out.go:177] * Restarting existing qemu2 VM for "multinode-817000" ...
	I1003 20:38:58.874211    3812 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:38:58.874254    3812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b8:ea:1f:36:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:38:58.876854    3812 main.go:141] libmachine: STDOUT: 
	I1003 20:38:58.876873    3812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:38:58.876909    3812 fix.go:56] duration metric: took 14.969083ms for fixHost
	I1003 20:38:58.876913    3812 start.go:83] releasing machines lock for "multinode-817000", held for 14.984542ms
	W1003 20:38:58.876919    3812 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:38:58.876967    3812 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:38:58.876973    3812 start.go:729] Will try again in 5 seconds ...
	I1003 20:39:03.879174    3812 start.go:360] acquireMachinesLock for multinode-817000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:39:03.879569    3812 start.go:364] duration metric: took 305.875µs to acquireMachinesLock for "multinode-817000"
	I1003 20:39:03.879661    3812 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:39:03.879683    3812 fix.go:54] fixHost starting: 
	I1003 20:39:03.880388    3812 fix.go:112] recreateIfNeeded on multinode-817000: state=Stopped err=<nil>
	W1003 20:39:03.880419    3812 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:39:03.884828    3812 out.go:177] * Restarting existing qemu2 VM for "multinode-817000" ...
	I1003 20:39:03.888817    3812 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:39:03.889017    3812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b8:ea:1f:36:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:39:03.899045    3812 main.go:141] libmachine: STDOUT: 
	I1003 20:39:03.899114    3812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:39:03.899172    3812 fix.go:56] duration metric: took 19.493167ms for fixHost
	I1003 20:39:03.899184    3812 start.go:83] releasing machines lock for "multinode-817000", held for 19.595333ms
	W1003 20:39:03.899345    3812 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-817000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-817000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:39:03.905778    3812 out.go:201] 
	W1003 20:39:03.909826    3812 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:39:03.909861    3812 out.go:270] * 
	* 
	W1003 20:39:03.912581    3812 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:39:03.920737    3812 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-817000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-817000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (33.3295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.17s)
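Every restart in this run fails at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client and gets "Connection refused" on /var/run/socket_vmnet, so the guest never comes up. A quick reachability probe such as the hedged Go sketch below (it assumes socket_vmnet accepts plain unix-socket connections at that path; it is a diagnostic aid, not part of minikube or the test suite) can confirm whether the socket_vmnet daemon is running on the build host before the VM is restarted:

// Hedged sketch: check whether anything is accepting connections on
// /var/run/socket_vmnet before retrying "minikube start".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Matches the failure mode in the log: "Connection refused" suggests the
		// socket_vmnet daemon is not running (or not listening on this path).
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}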

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 node delete m03: exit status 83 (39.38175ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-817000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-817000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-817000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr: exit status 7 (31.152166ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:39:04.107899    3826 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:39:04.108111    3826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:04.108114    3826 out.go:358] Setting ErrFile to fd 2...
	I1003 20:39:04.108117    3826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:04.108251    3826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:39:04.108375    3826 out.go:352] Setting JSON to false
	I1003 20:39:04.108386    3826 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:39:04.108460    3826 notify.go:220] Checking for updates...
	I1003 20:39:04.108588    3826 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:39:04.108598    3826 status.go:174] checking status of multinode-817000 ...
	I1003 20:39:04.108825    3826 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:39:04.108829    3826 status.go:384] host is not running, skipping remaining checks
	I1003 20:39:04.108831    3826 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (31.252291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
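The post-mortem helper asks for `status --format={{.Host}}` and gets back "Stopped", which is the Host field of the status struct logged at status.go:176. Assuming the --format flag value is rendered as a Go text/template over that struct (an assumption made only for illustration), the observed output can be reproduced with a small sketch:

// Hedged sketch: render a {{.Host}}-style template over the status fields seen in
// the stderr above. The struct values are copied from the log; the template
// rendering is illustrative, not minikube's actual code path.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	st := Status{Name: "multinode-817000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped", Worker: false}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the post-mortem output
}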

                                                
                                    
TestMultiNode/serial/StopMultiNode (1.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-817000 stop: (1.8224125s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status: exit status 7 (67.771625ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr: exit status 7 (33.44775ms)

                                                
                                                
-- stdout --
	multinode-817000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:39:06.063461    3842 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:39:06.063653    3842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:06.063656    3842 out.go:358] Setting ErrFile to fd 2...
	I1003 20:39:06.063658    3842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:06.063828    3842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:39:06.063961    3842 out.go:352] Setting JSON to false
	I1003 20:39:06.063971    3842 mustload.go:65] Loading cluster: multinode-817000
	I1003 20:39:06.064055    3842 notify.go:220] Checking for updates...
	I1003 20:39:06.064181    3842 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:39:06.064192    3842 status.go:174] checking status of multinode-817000 ...
	I1003 20:39:06.064426    3842 status.go:371] multinode-817000 host status = "Stopped" (err=<nil>)
	I1003 20:39:06.064430    3842 status.go:384] host is not running, skipping remaining checks
	I1003 20:39:06.064432    3842 status.go:176] multinode-817000 status: &{Name:multinode-817000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr": multinode-817000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-817000 status --alsologtostderr": multinode-817000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (30.801875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (1.96s)
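The assertions at multinode_test.go:364 and :368 complain about an "incorrect number of stopped hosts/kubelets": only one node block appears in the status output, although a multinode profile should report one block per node. A hedged sketch of that kind of count check (the expected node count of 2 and the strings.Count approach are illustrative assumptions, not the test's actual implementation):

// Hedged sketch: count "host: Stopped" / "kubelet: Stopped" occurrences in the
// plain-text status output and compare against an assumed node count.
package main

import (
	"fmt"
	"strings"
)

func main() {
	statusOut := `multinode-817000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	wantNodes := 2 // hypothetical: one control-plane node plus one worker
	stoppedHosts := strings.Count(statusOut, "host: Stopped")
	stoppedKubelets := strings.Count(statusOut, "kubelet: Stopped")
	if stoppedHosts != wantNodes || stoppedKubelets != wantNodes {
		fmt.Printf("incorrect number of stopped hosts/kubelets: got %d/%d, want %d\n",
			stoppedHosts, stoppedKubelets, wantNodes)
	}
}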

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-817000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-817000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.193236667s)

                                                
                                                
-- stdout --
	* [multinode-817000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-817000" primary control-plane node in "multinode-817000" cluster
	* Restarting existing qemu2 VM for "multinode-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:39:06.124668    3846 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:39:06.124826    3846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:06.124830    3846 out.go:358] Setting ErrFile to fd 2...
	I1003 20:39:06.124832    3846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:06.124954    3846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:39:06.126022    3846 out.go:352] Setting JSON to false
	I1003 20:39:06.143674    3846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4117,"bootTime":1728009029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:39:06.143746    3846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:39:06.148610    3846 out.go:177] * [multinode-817000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:39:06.157582    3846 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:39:06.157611    3846 notify.go:220] Checking for updates...
	I1003 20:39:06.164534    3846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:39:06.167545    3846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:39:06.170560    3846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:39:06.173546    3846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:39:06.176587    3846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:39:06.179886    3846 config.go:182] Loaded profile config "multinode-817000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:39:06.180151    3846 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:39:06.184521    3846 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:39:06.191592    3846 start.go:297] selected driver: qemu2
	I1003 20:39:06.191599    3846 start.go:901] validating driver "qemu2" against &{Name:multinode-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:39:06.191673    3846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:39:06.194280    3846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:39:06.194306    3846 cni.go:84] Creating CNI manager for ""
	I1003 20:39:06.194338    3846 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1003 20:39:06.194389    3846 start.go:340] cluster config:
	{Name:multinode-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:39:06.198791    3846 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:06.206527    3846 out.go:177] * Starting "multinode-817000" primary control-plane node in "multinode-817000" cluster
	I1003 20:39:06.209519    3846 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:39:06.209539    3846 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:39:06.209547    3846 cache.go:56] Caching tarball of preloaded images
	I1003 20:39:06.209638    3846 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:39:06.209643    3846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:39:06.209708    3846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/multinode-817000/config.json ...
	I1003 20:39:06.210090    3846 start.go:360] acquireMachinesLock for multinode-817000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:39:06.210127    3846 start.go:364] duration metric: took 30.417µs to acquireMachinesLock for "multinode-817000"
	I1003 20:39:06.210136    3846 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:39:06.210141    3846 fix.go:54] fixHost starting: 
	I1003 20:39:06.210260    3846 fix.go:112] recreateIfNeeded on multinode-817000: state=Stopped err=<nil>
	W1003 20:39:06.210269    3846 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:39:06.214580    3846 out.go:177] * Restarting existing qemu2 VM for "multinode-817000" ...
	I1003 20:39:06.221543    3846 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:39:06.221604    3846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b8:ea:1f:36:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:39:06.223935    3846 main.go:141] libmachine: STDOUT: 
	I1003 20:39:06.223954    3846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:39:06.223982    3846 fix.go:56] duration metric: took 13.84075ms for fixHost
	I1003 20:39:06.223988    3846 start.go:83] releasing machines lock for "multinode-817000", held for 13.856583ms
	W1003 20:39:06.223996    3846 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:39:06.224030    3846 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:39:06.224035    3846 start.go:729] Will try again in 5 seconds ...
	I1003 20:39:11.226310    3846 start.go:360] acquireMachinesLock for multinode-817000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:39:11.226850    3846 start.go:364] duration metric: took 449.459µs to acquireMachinesLock for "multinode-817000"
	I1003 20:39:11.226986    3846 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:39:11.227003    3846 fix.go:54] fixHost starting: 
	I1003 20:39:11.227672    3846 fix.go:112] recreateIfNeeded on multinode-817000: state=Stopped err=<nil>
	W1003 20:39:11.227695    3846 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:39:11.232231    3846 out.go:177] * Restarting existing qemu2 VM for "multinode-817000" ...
	I1003 20:39:11.240145    3846 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:39:11.240343    3846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b8:ea:1f:36:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/multinode-817000/disk.qcow2
	I1003 20:39:11.249653    3846 main.go:141] libmachine: STDOUT: 
	I1003 20:39:11.249735    3846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:39:11.249814    3846 fix.go:56] duration metric: took 22.812958ms for fixHost
	I1003 20:39:11.249837    3846 start.go:83] releasing machines lock for "multinode-817000", held for 22.9645ms
	W1003 20:39:11.250041    3846 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-817000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-817000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:39:11.258111    3846 out.go:201] 
	W1003 20:39:11.262248    3846 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:39:11.262278    3846 out.go:270] * 
	* 
	W1003 20:39:11.264211    3846 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:39:11.277201    3846 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-817000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (70.680834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
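Every failure in this run reduces to the same root cause visible in the log above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no guest ever boots. As a minimal sketch (illustrative only, not part of the minikube test suite), a Go preflight probe of that socket, assuming the SocketVMnetPath shown in the cluster config above, could look like this:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// socketPath matches the SocketVMnetPath value in the cluster config logged above.
	const socketPath = "/var/run/socket_vmnet"

	func main() {
		// Dial the unix socket the same way socket_vmnet_client must before QEMU can get a network;
		// a "connection refused" error here corresponds to the failures in this report.
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", socketPath)
	}

If such a probe fails, the socket_vmnet daemon on the host is either not running or listening on a different path; restarting it before the run is the likely fix, though the exact service setup on this Jenkins host is an assumption here.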

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (19.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-817000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-817000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-817000-m01 --driver=qemu2 : exit status 80 (9.714099209s)

                                                
                                                
-- stdout --
	* [multinode-817000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-817000-m01" primary control-plane node in "multinode-817000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-817000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-817000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-817000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-817000-m02 --driver=qemu2 : exit status 80 (9.805444875s)

                                                
                                                
-- stdout --
	* [multinode-817000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-817000-m02" primary control-plane node in "multinode-817000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-817000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-817000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-817000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-817000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-817000: exit status 83 (83.77375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-817000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-817000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-817000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-817000 -n multinode-817000: exit status 7 (30.952334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.75s)

                                                
                                    
TestPreload (9.89s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.743186541s)

                                                
                                                
-- stdout --
	* [test-preload-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-877000" primary control-plane node in "test-preload-877000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-877000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:39:31.242835    3901 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:39:31.242974    3901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:31.242977    3901 out.go:358] Setting ErrFile to fd 2...
	I1003 20:39:31.242979    3901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:39:31.243104    3901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:39:31.244207    3901 out.go:352] Setting JSON to false
	I1003 20:39:31.261968    3901 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4142,"bootTime":1728009029,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:39:31.262034    3901 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:39:31.265996    3901 out.go:177] * [test-preload-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:39:31.273981    3901 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:39:31.274029    3901 notify.go:220] Checking for updates...
	I1003 20:39:31.279300    3901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:39:31.281934    3901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:39:31.284977    3901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:39:31.287970    3901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:39:31.291008    3901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:39:31.294336    3901 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:39:31.294404    3901 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:39:31.298939    3901 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:39:31.305980    3901 start.go:297] selected driver: qemu2
	I1003 20:39:31.305987    3901 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:39:31.305993    3901 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:39:31.308619    3901 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:39:31.311963    3901 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:39:31.315003    3901 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:39:31.315019    3901 cni.go:84] Creating CNI manager for ""
	I1003 20:39:31.315038    3901 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:39:31.315043    3901 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:39:31.315071    3901 start.go:340] cluster config:
	{Name:test-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:39:31.319741    3901 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.326935    3901 out.go:177] * Starting "test-preload-877000" primary control-plane node in "test-preload-877000" cluster
	I1003 20:39:31.330922    3901 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1003 20:39:31.331008    3901 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/test-preload-877000/config.json ...
	I1003 20:39:31.331033    3901 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/test-preload-877000/config.json: {Name:mka3cb6b09b26685fa5bbabfad302ca663b89c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:39:31.331055    3901 cache.go:107] acquiring lock: {Name:mk4ffe7ca6ed0a1363244dc2b9236fd0b2364712 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331096    3901 cache.go:107] acquiring lock: {Name:mk7665aef646c1ad22233bf5f45b573c2ae75e90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331120    3901 cache.go:107] acquiring lock: {Name:mk45998a03d9bbe6edcebed90a7d8a3da8aa2e81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331216    3901 cache.go:107] acquiring lock: {Name:mke4d7802f3ba0a0a057fa76a5fd30e0d4c9c740 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331249    3901 cache.go:107] acquiring lock: {Name:mka7c2b3f3dbc359020374649c29d0d4670ff400 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331258    3901 cache.go:107] acquiring lock: {Name:mkeb1f6e390fab35a9b44b8e3c5da18cd1edb37f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331217    3901 cache.go:107] acquiring lock: {Name:mk6775af5d9a988f04125f6a542c96628229d14d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331296    3901 cache.go:107] acquiring lock: {Name:mkc52412151a70a3adc22048ec4dfbf1f4d70eb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:39:31.331478    3901 start.go:360] acquireMachinesLock for test-preload-877000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:39:31.331796    3901 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1003 20:39:31.331844    3901 start.go:364] duration metric: took 351.542µs to acquireMachinesLock for "test-preload-877000"
	I1003 20:39:31.331883    3901 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1003 20:39:31.331858    3901 start.go:93] Provisioning new machine with config: &{Name:test-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:39:31.331892    3901 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:39:31.331945    3901 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:39:31.331972    3901 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:39:31.332011    3901 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1003 20:39:31.332281    3901 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1003 20:39:31.332310    3901 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:39:31.332315    3901 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1003 20:39:31.334931    3901 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:39:31.343181    3901 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1003 20:39:31.343328    3901 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1003 20:39:31.343469    3901 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:39:31.343468    3901 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:39:31.343756    3901 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1003 20:39:31.343924    3901 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:39:31.343953    3901 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1003 20:39:31.343939    3901 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1003 20:39:31.352772    3901 start.go:159] libmachine.API.Create for "test-preload-877000" (driver="qemu2")
	I1003 20:39:31.352790    3901 client.go:168] LocalClient.Create starting
	I1003 20:39:31.352885    3901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:39:31.352929    3901 main.go:141] libmachine: Decoding PEM data...
	I1003 20:39:31.352946    3901 main.go:141] libmachine: Parsing certificate...
	I1003 20:39:31.352992    3901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:39:31.353023    3901 main.go:141] libmachine: Decoding PEM data...
	I1003 20:39:31.353033    3901 main.go:141] libmachine: Parsing certificate...
	I1003 20:39:31.353412    3901 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:39:31.483072    3901 main.go:141] libmachine: Creating SSH key...
	I1003 20:39:31.598982    3901 main.go:141] libmachine: Creating Disk image...
	I1003 20:39:31.599004    3901 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:39:31.599227    3901 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2
	I1003 20:39:31.608959    3901 main.go:141] libmachine: STDOUT: 
	I1003 20:39:31.608978    3901 main.go:141] libmachine: STDERR: 
	I1003 20:39:31.609031    3901 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2 +20000M
	I1003 20:39:31.618303    3901 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:39:31.618342    3901 main.go:141] libmachine: STDERR: 
	I1003 20:39:31.618362    3901 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2
	I1003 20:39:31.618367    3901 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:39:31.618380    3901 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:39:31.618404    3901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ac:ed:2a:6d:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2
	I1003 20:39:31.620939    3901 main.go:141] libmachine: STDOUT: 
	I1003 20:39:31.620957    3901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:39:31.620976    3901 client.go:171] duration metric: took 268.180167ms to LocalClient.Create
	I1003 20:39:33.353139    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1003 20:39:33.507923    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1003 20:39:33.510117    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1003 20:39:33.523642    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1003 20:39:33.622172    3901 start.go:128] duration metric: took 2.290263042s to createHost
	I1003 20:39:33.622217    3901 start.go:83] releasing machines lock for "test-preload-877000", held for 2.290363375s
	W1003 20:39:33.622278    3901 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:39:33.631119    3901 out.go:177] * Deleting "test-preload-877000" in qemu2 ...
	W1003 20:39:33.651778    3901 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:39:33.651810    3901 start.go:729] Will try again in 5 seconds ...
	I1003 20:39:34.059319    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W1003 20:39:34.061523    3901 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1003 20:39:34.061627    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1003 20:39:34.093364    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1003 20:39:34.193230    3901 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1003 20:39:34.193298    3901 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.8620795s
	I1003 20:39:34.193339    3901 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1003 20:39:34.266687    3901 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 20:39:34.266777    3901 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 20:39:35.176415    3901 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1003 20:39:35.176466    3901 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.845209917s
	I1003 20:39:35.176494    3901 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1003 20:39:35.738051    3901 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1003 20:39:35.738106    3901 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.406911625s
	I1003 20:39:35.738132    3901 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1003 20:39:36.326952    3901 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 20:39:36.327028    3901 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.995969791s
	I1003 20:39:36.327072    3901 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 20:39:37.802642    3901 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1003 20:39:37.802689    3901 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.471514459s
	I1003 20:39:37.802715    3901 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1003 20:39:37.995535    3901 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1003 20:39:37.995582    3901 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.664491791s
	I1003 20:39:37.995608    3901 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1003 20:39:38.591858    3901 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1003 20:39:38.591909    3901 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.260823292s
	I1003 20:39:38.591936    3901 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1003 20:39:38.652809    3901 start.go:360] acquireMachinesLock for test-preload-877000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:39:38.653255    3901 start.go:364] duration metric: took 392µs to acquireMachinesLock for "test-preload-877000"
	I1003 20:39:38.653310    3901 start.go:93] Provisioning new machine with config: &{Name:test-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:39:38.653538    3901 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:39:38.660148    3901 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:39:38.708289    3901 start.go:159] libmachine.API.Create for "test-preload-877000" (driver="qemu2")
	I1003 20:39:38.708340    3901 client.go:168] LocalClient.Create starting
	I1003 20:39:38.708485    3901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:39:38.708563    3901 main.go:141] libmachine: Decoding PEM data...
	I1003 20:39:38.708585    3901 main.go:141] libmachine: Parsing certificate...
	I1003 20:39:38.708658    3901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:39:38.708717    3901 main.go:141] libmachine: Decoding PEM data...
	I1003 20:39:38.708737    3901 main.go:141] libmachine: Parsing certificate...
	I1003 20:39:38.709297    3901 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:39:38.847135    3901 main.go:141] libmachine: Creating SSH key...
	I1003 20:39:38.894741    3901 main.go:141] libmachine: Creating Disk image...
	I1003 20:39:38.894747    3901 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:39:38.894940    3901 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2
	I1003 20:39:38.905222    3901 main.go:141] libmachine: STDOUT: 
	I1003 20:39:38.905241    3901 main.go:141] libmachine: STDERR: 
	I1003 20:39:38.905316    3901 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2 +20000M
	I1003 20:39:38.914055    3901 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:39:38.914071    3901 main.go:141] libmachine: STDERR: 
	I1003 20:39:38.914084    3901 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2
	I1003 20:39:38.914090    3901 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:39:38.914101    3901 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:39:38.914145    3901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ac:39:ee:5f:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/test-preload-877000/disk.qcow2
	I1003 20:39:38.916043    3901 main.go:141] libmachine: STDOUT: 
	I1003 20:39:38.916062    3901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:39:38.916075    3901 client.go:171] duration metric: took 207.729708ms to LocalClient.Create
	I1003 20:39:40.916275    3901 start.go:128] duration metric: took 2.262705042s to createHost
	I1003 20:39:40.916346    3901 start.go:83] releasing machines lock for "test-preload-877000", held for 2.263063375s
	W1003 20:39:40.916636    3901 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:39:40.927177    3901 out.go:201] 
	W1003 20:39:40.930132    3901 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:39:40.930161    3901 out.go:270] * 
	* 
	W1003 20:39:40.933062    3901 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:39:40.942105    3901 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-03 20:39:40.958966 -0700 PDT m=+3133.678491709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-877000 -n test-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-877000 -n test-preload-877000: exit status 7 (70.135042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-877000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-877000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-877000
--- FAIL: TestPreload (9.89s)

                                                
                                    
TestScheduledStopUnix (10.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-199000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-199000 --memory=2048 --driver=qemu2 : exit status 80 (9.86917675s)

                                                
                                                
-- stdout --
	* [scheduled-stop-199000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-199000" primary control-plane node in "scheduled-stop-199000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-199000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-199000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-199000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-199000" primary control-plane node in "scheduled-stop-199000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-199000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-199000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-03 20:39:50.975165 -0700 PDT m=+3143.694690251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-199000 -n scheduled-stop-199000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-199000 -n scheduled-stop-199000: exit status 7 (70.8685ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-199000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-199000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-199000
--- FAIL: TestScheduledStopUnix (10.02s)
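
Note: the start failure above (and the same "Failed to connect to \"/var/run/socket_vmnet\": Connection refused" error that recurs in later tests in this report) means nothing is accepting connections on the socket the qemu2 driver needs for the socket_vmnet network. A minimal diagnostic sketch for the CI host follows; it assumes only that the daemon is expected at the socket path minikube reports above, and the restart line is hypothetical since it depends on how socket_vmnet was installed on this machine (e.g. Homebrew services vs. a launchd job):

	# Is a socket_vmnet process running at all?
	pgrep -fl socket_vmnet
	# Does the socket file exist, and is it actually a socket?
	ls -l /var/run/socket_vmnet
	# Try to connect; a refusal here reproduces the error minikube reports.
	nc -U /var/run/socket_vmnet </dev/null
	# Hypothetical restart, only if socket_vmnet was installed via Homebrew services:
	sudo brew services restart socket_vmnet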

                                                
                                    
TestSkaffold (15.96s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3483191666 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3483191666 version: (1.063903875s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-069000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-069000 --memory=2600 --driver=qemu2 : exit status 80 (9.70452625s)

                                                
                                                
-- stdout --
	* [skaffold-069000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-069000" primary control-plane node in "skaffold-069000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-069000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-069000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-069000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-069000" primary control-plane node in "skaffold-069000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-069000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-069000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-03 20:40:06.949295 -0700 PDT m=+3159.668818126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-069000 -n skaffold-069000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-069000 -n skaffold-069000: exit status 7 (63.274958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-069000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-069000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-069000
--- FAIL: TestSkaffold (15.96s)
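
Note: TestSkaffold gets as far as checking the skaffold version, then fails at the same socket_vmnet step while starting the cluster, so skaffold itself is never exercised. A manual-reproduction sketch, using only flags that already appear elsewhere in this report (the profile name is the one the test happened to use; any profile name would do):

	# Re-run the failing start with verbose driver logging, as the upgrade test further below does.
	out/minikube-darwin-arm64 start -p skaffold-069000 --memory=2600 --driver=qemu2 --alsologtostderr -v=1
	# Collect logs for attaching to an issue, as the error box suggests.
	out/minikube-darwin-arm64 logs --file=logs.txt
	# Clean up the half-created profile afterwards.
	out/minikube-darwin-arm64 delete -p skaffold-069000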

                                                
                                    
TestRunningBinaryUpgrade (621.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.578642766 start -p running-upgrade-902000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.578642766 start -p running-upgrade-902000 --memory=2200 --vm-driver=qemu2 : (1m20.834096042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-902000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1003 20:42:38.539957    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:42:51.694261    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-902000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.563118666s)

                                                
                                                
-- stdout --
	* [running-upgrade-902000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-902000" primary control-plane node in "running-upgrade-902000" cluster
	* Updating the running qemu2 "running-upgrade-902000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:42:13.578052    4280 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:42:13.578191    4280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:42:13.578194    4280 out.go:358] Setting ErrFile to fd 2...
	I1003 20:42:13.578197    4280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:42:13.578325    4280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:42:13.579386    4280 out.go:352] Setting JSON to false
	I1003 20:42:13.598959    4280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4304,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:42:13.599034    4280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:42:13.603019    4280 out.go:177] * [running-upgrade-902000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:42:13.609927    4280 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:42:13.610000    4280 notify.go:220] Checking for updates...
	I1003 20:42:13.617833    4280 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:42:13.621893    4280 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:42:13.624890    4280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:42:13.627935    4280 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:42:13.630915    4280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:42:13.634206    4280 config.go:182] Loaded profile config "running-upgrade-902000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:42:13.636913    4280 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1003 20:42:13.639889    4280 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:42:13.643953    4280 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:42:13.650912    4280 start.go:297] selected driver: qemu2
	I1003 20:42:13.650918    4280 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50280 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:42:13.650981    4280 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:42:13.653825    4280 cni.go:84] Creating CNI manager for ""
	I1003 20:42:13.653857    4280 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:42:13.653884    4280 start.go:340] cluster config:
	{Name:running-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50280 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:42:13.653936    4280 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:42:13.662853    4280 out.go:177] * Starting "running-upgrade-902000" primary control-plane node in "running-upgrade-902000" cluster
	I1003 20:42:13.666954    4280 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1003 20:42:13.666990    4280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1003 20:42:13.667000    4280 cache.go:56] Caching tarball of preloaded images
	I1003 20:42:13.667101    4280 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:42:13.667108    4280 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1003 20:42:13.667167    4280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/config.json ...
	I1003 20:42:13.667497    4280 start.go:360] acquireMachinesLock for running-upgrade-902000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:42:13.667530    4280 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "running-upgrade-902000"
	I1003 20:42:13.667538    4280 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:42:13.667542    4280 fix.go:54] fixHost starting: 
	I1003 20:42:13.668165    4280 fix.go:112] recreateIfNeeded on running-upgrade-902000: state=Running err=<nil>
	W1003 20:42:13.668175    4280 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:42:13.672879    4280 out.go:177] * Updating the running qemu2 "running-upgrade-902000" VM ...
	I1003 20:42:13.679794    4280 machine.go:93] provisionDockerMachine start ...
	I1003 20:42:13.679845    4280 main.go:141] libmachine: Using SSH client type: native
	I1003 20:42:13.679969    4280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100be9c00] 0x100bec440 <nil>  [] 0s} localhost 50248 <nil> <nil>}
	I1003 20:42:13.679974    4280 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:42:13.744496    4280 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-902000
	
	I1003 20:42:13.744511    4280 buildroot.go:166] provisioning hostname "running-upgrade-902000"
	I1003 20:42:13.744572    4280 main.go:141] libmachine: Using SSH client type: native
	I1003 20:42:13.744690    4280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100be9c00] 0x100bec440 <nil>  [] 0s} localhost 50248 <nil> <nil>}
	I1003 20:42:13.744696    4280 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-902000 && echo "running-upgrade-902000" | sudo tee /etc/hostname
	I1003 20:42:13.809859    4280 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-902000
	
	I1003 20:42:13.809910    4280 main.go:141] libmachine: Using SSH client type: native
	I1003 20:42:13.810021    4280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100be9c00] 0x100bec440 <nil>  [] 0s} localhost 50248 <nil> <nil>}
	I1003 20:42:13.810029    4280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-902000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-902000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-902000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:42:13.868515    4280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:42:13.868528    4280 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1040/.minikube}
	I1003 20:42:13.868536    4280 buildroot.go:174] setting up certificates
	I1003 20:42:13.868545    4280 provision.go:84] configureAuth start
	I1003 20:42:13.868553    4280 provision.go:143] copyHostCerts
	I1003 20:42:13.868612    4280 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem, removing ...
	I1003 20:42:13.868624    4280 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem
	I1003 20:42:13.868769    4280 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem (1078 bytes)
	I1003 20:42:13.868951    4280 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem, removing ...
	I1003 20:42:13.868957    4280 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem
	I1003 20:42:13.868998    4280 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem (1123 bytes)
	I1003 20:42:13.869123    4280 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem, removing ...
	I1003 20:42:13.869126    4280 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem
	I1003 20:42:13.869167    4280 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem (1675 bytes)
	I1003 20:42:13.869259    4280 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-902000 san=[127.0.0.1 localhost minikube running-upgrade-902000]
	I1003 20:42:13.907344    4280 provision.go:177] copyRemoteCerts
	I1003 20:42:13.907383    4280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:42:13.907389    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	I1003 20:42:13.941224    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1003 20:42:13.948150    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:42:13.955399    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1003 20:42:13.962991    4280 provision.go:87] duration metric: took 94.435875ms to configureAuth
	I1003 20:42:13.963000    4280 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:42:13.963109    4280 config.go:182] Loaded profile config "running-upgrade-902000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:42:13.963159    4280 main.go:141] libmachine: Using SSH client type: native
	I1003 20:42:13.963244    4280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100be9c00] 0x100bec440 <nil>  [] 0s} localhost 50248 <nil> <nil>}
	I1003 20:42:13.963249    4280 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:42:14.025199    4280 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:42:14.025207    4280 buildroot.go:70] root file system type: tmpfs
	I1003 20:42:14.025258    4280 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:42:14.025333    4280 main.go:141] libmachine: Using SSH client type: native
	I1003 20:42:14.025443    4280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100be9c00] 0x100bec440 <nil>  [] 0s} localhost 50248 <nil> <nil>}
	I1003 20:42:14.025475    4280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:42:14.089806    4280 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:42:14.089869    4280 main.go:141] libmachine: Using SSH client type: native
	I1003 20:42:14.089981    4280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100be9c00] 0x100bec440 <nil>  [] 0s} localhost 50248 <nil> <nil>}
	I1003 20:42:14.089989    4280 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:42:14.150626    4280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:42:14.150638    4280 machine.go:96] duration metric: took 470.8375ms to provisionDockerMachine
	I1003 20:42:14.150643    4280 start.go:293] postStartSetup for "running-upgrade-902000" (driver="qemu2")
	I1003 20:42:14.150649    4280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:42:14.150720    4280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:42:14.150729    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	I1003 20:42:14.187111    4280 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:42:14.188363    4280 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 20:42:14.188372    4280 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/addons for local assets ...
	I1003 20:42:14.188428    4280 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/files for local assets ...
	I1003 20:42:14.188522    4280 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem -> 15562.pem in /etc/ssl/certs
	I1003 20:42:14.188622    4280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:42:14.191382    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:42:14.198164    4280 start.go:296] duration metric: took 47.515791ms for postStartSetup
	I1003 20:42:14.198177    4280 fix.go:56] duration metric: took 530.635125ms for fixHost
	I1003 20:42:14.198222    4280 main.go:141] libmachine: Using SSH client type: native
	I1003 20:42:14.198325    4280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100be9c00] 0x100bec440 <nil>  [] 0s} localhost 50248 <nil> <nil>}
	I1003 20:42:14.198330    4280 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:42:14.256633    4280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013334.226716599
	
	I1003 20:42:14.256641    4280 fix.go:216] guest clock: 1728013334.226716599
	I1003 20:42:14.256644    4280 fix.go:229] Guest: 2024-10-03 20:42:14.226716599 -0700 PDT Remote: 2024-10-03 20:42:14.198179 -0700 PDT m=+0.641270418 (delta=28.537599ms)
	I1003 20:42:14.256654    4280 fix.go:200] guest clock delta is within tolerance: 28.537599ms
	I1003 20:42:14.256659    4280 start.go:83] releasing machines lock for "running-upgrade-902000", held for 589.123375ms
	I1003 20:42:14.256716    4280 ssh_runner.go:195] Run: cat /version.json
	I1003 20:42:14.256725    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	I1003 20:42:14.257427    4280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:42:14.257449    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	W1003 20:42:14.292213    4280 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1003 20:42:14.292267    4280 ssh_runner.go:195] Run: systemctl --version
	I1003 20:42:14.335049    4280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:42:14.336890    4280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:42:14.336924    4280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1003 20:42:14.339959    4280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1003 20:42:14.344431    4280 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:42:14.344438    4280 start.go:495] detecting cgroup driver to use...
	I1003 20:42:14.344496    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:42:14.349522    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1003 20:42:14.352395    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:42:14.355807    4280 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:42:14.355835    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:42:14.359207    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:42:14.362746    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:42:14.366026    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:42:14.368815    4280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:42:14.372008    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:42:14.375114    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:42:14.378012    4280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:42:14.380967    4280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:42:14.383887    4280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:42:14.386971    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:42:14.473431    4280 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:42:14.479538    4280 start.go:495] detecting cgroup driver to use...
	I1003 20:42:14.479614    4280 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:42:14.487698    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:42:14.492336    4280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:42:14.504954    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:42:14.509604    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:42:14.513877    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:42:14.519337    4280 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:42:14.520620    4280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:42:14.523583    4280 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 20:42:14.528831    4280 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:42:14.608473    4280 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:42:14.703137    4280 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:42:14.703213    4280 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:42:14.708698    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:42:14.795479    4280 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:42:16.462642    4280 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.667147167s)
	I1003 20:42:16.462718    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:42:16.467249    4280 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1003 20:42:16.473840    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:42:16.478522    4280 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:42:16.545379    4280 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:42:16.627098    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:42:16.701607    4280 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:42:16.707601    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:42:16.712494    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:42:16.783410    4280 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:42:16.823245    4280 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:42:16.823324    4280 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:42:16.826695    4280 start.go:563] Will wait 60s for crictl version
	I1003 20:42:16.826749    4280 ssh_runner.go:195] Run: which crictl
	I1003 20:42:16.828216    4280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:42:16.840097    4280 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1003 20:42:16.840185    4280 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:42:16.852254    4280 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:42:16.875256    4280 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1003 20:42:16.875411    4280 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1003 20:42:16.876812    4280 kubeadm.go:883] updating cluster {Name:running-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50280 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:running-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1003 20:42:16.876858    4280 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1003 20:42:16.876900    4280 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:42:16.893353    4280 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:42:16.893362    4280 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1003 20:42:16.893423    4280 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:42:16.896464    4280 ssh_runner.go:195] Run: which lz4
	I1003 20:42:16.897828    4280 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:42:16.899007    4280 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:42:16.899017    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1003 20:42:17.864647    4280 docker.go:649] duration metric: took 966.862ms to copy over tarball
	I1003 20:42:17.864722    4280 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:42:18.972995    4280 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.108259125s)
	I1003 20:42:18.973009    4280 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 20:42:18.988630    4280 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:42:18.991850    4280 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1003 20:42:18.996979    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:42:19.087578    4280 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:42:20.443638    4280 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.356043375s)
	I1003 20:42:20.443732    4280 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:42:20.456880    4280 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:42:20.456889    4280 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1003 20:42:20.456893    4280 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 20:42:20.460944    4280 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:42:20.462764    4280 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:42:20.465162    4280 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:42:20.465192    4280 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:42:20.466962    4280 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:42:20.467258    4280 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:42:20.468639    4280 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:42:20.468891    4280 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1003 20:42:20.470385    4280 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:42:20.470407    4280 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:42:20.472117    4280 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1003 20:42:20.472207    4280 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:42:20.473344    4280 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:42:20.473499    4280 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:42:20.474033    4280 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:42:20.475822    4280 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:42:22.627836    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:42:22.664894    4280 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1003 20:42:22.664947    4280 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:42:22.665075    4280 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:42:22.686357    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1003 20:42:22.693562    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:42:22.694809    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:42:22.711816    4280 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1003 20:42:22.711841    4280 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:42:22.711917    4280 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:42:22.714046    4280 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1003 20:42:22.714062    4280 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:42:22.714114    4280 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:42:22.730903    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1003 20:42:22.733830    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1003 20:42:22.808069    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1003 20:42:22.824407    4280 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1003 20:42:22.824433    4280 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:42:22.824510    4280 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1003 20:42:22.835859    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1003 20:42:22.922476    4280 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 20:42:22.922675    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:42:22.940714    4280 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1003 20:42:22.940743    4280 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:42:22.940822    4280 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	W1003 20:42:23.011814    4280 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1003 20:42:23.011989    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:42:23.059184    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1003 20:42:23.071957    4280 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:42:23.817881    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 20:42:23.818062    4280 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1003 20:42:23.818140    4280 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:42:23.818297    4280 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:42:23.818414    4280 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1003 20:42:23.818431    4280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1003 20:42:23.818437    4280 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1003 20:42:23.818508    4280 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1003 20:42:23.818560    4280 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1003 20:42:23.818581    4280 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:42:23.818647    4280 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:42:23.866034    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1003 20:42:23.866050    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1003 20:42:23.866110    4280 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1003 20:42:23.866129    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1003 20:42:23.866214    4280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1003 20:42:23.866322    4280 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1003 20:42:23.866424    4280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1003 20:42:23.879882    4280 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1003 20:42:23.879909    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1003 20:42:23.879908    4280 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1003 20:42:23.879944    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1003 20:42:23.899711    4280 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1003 20:42:23.899726    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1003 20:42:23.967416    4280 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1003 20:42:23.967438    4280 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1003 20:42:23.967444    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1003 20:42:24.207716    4280 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1003 20:42:24.207741    4280 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1003 20:42:24.207748    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1003 20:42:24.243533    4280 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1003 20:42:24.243569    4280 cache_images.go:92] duration metric: took 3.786669125s to LoadCachedImages
	W1003 20:42:24.243610    4280 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
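The image-cache phase above follows a fixed pattern: `stat` the tarball on the guest, transfer it only if missing, then pipe it into the container runtime with `docker load`. The following is a minimal, hypothetical Go sketch of that pattern for a locally reachable tarball (paths are illustrative only, and this is not minikube's actual implementation, which transfers files over SSH first):

```go
// Hypothetical sketch of the transfer-and-load pattern in the log above:
// check whether the image tarball is present, and if so stream it into
// the Docker daemon with `docker load`. Paths and the bash/sudo usage
// mirror the log but are assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	// Equivalent of the `stat -c "%s %y" <tarball>` existence check.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("image tarball not present, transfer needed: %w", err)
	}
	// Equivalent of `sudo cat <tarball> | docker load`.
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", tarball))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```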
	I1003 20:42:24.243615    4280 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1003 20:42:24.243667    4280 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-902000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:42:24.243738    4280 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:42:24.257021    4280 cni.go:84] Creating CNI manager for ""
	I1003 20:42:24.257038    4280 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:42:24.257044    4280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:42:24.257053    4280 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-902000 NodeName:running-upgrade-902000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:42:24.257120    4280 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-902000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:42:24.257188    4280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1003 20:42:24.260206    4280 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:42:24.260243    4280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 20:42:24.263391    4280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1003 20:42:24.268571    4280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:42:24.273576    4280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1003 20:42:24.279028    4280 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1003 20:42:24.280352    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:42:24.359866    4280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:42:24.365508    4280 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000 for IP: 10.0.2.15
	I1003 20:42:24.365520    4280 certs.go:194] generating shared ca certs ...
	I1003 20:42:24.365529    4280 certs.go:226] acquiring lock for ca certs: {Name:mke7121fb3a343b392a0b01a3f973157c3dad296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:42:24.365690    4280 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key
	I1003 20:42:24.365724    4280 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key
	I1003 20:42:24.365729    4280 certs.go:256] generating profile certs ...
	I1003 20:42:24.365791    4280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/client.key
	I1003 20:42:24.365806    4280 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.key.1ba95d07
	I1003 20:42:24.365817    4280 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.crt.1ba95d07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1003 20:42:24.498696    4280 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.crt.1ba95d07 ...
	I1003 20:42:24.498702    4280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.crt.1ba95d07: {Name:mk1472411ebbaa1fc259983c8d3e8ae806024435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:42:24.499083    4280 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.key.1ba95d07 ...
	I1003 20:42:24.499088    4280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.key.1ba95d07: {Name:mk061ba29af11fd1790e9bb26774afbfd226455c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:42:24.499264    4280 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.crt.1ba95d07 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.crt
	I1003 20:42:24.499393    4280 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.key.1ba95d07 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.key
	I1003 20:42:24.499553    4280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/proxy-client.key
	I1003 20:42:24.499694    4280 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem (1338 bytes)
	W1003 20:42:24.499724    4280 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556_empty.pem, impossibly tiny 0 bytes
	I1003 20:42:24.499730    4280 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:42:24.499758    4280 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem (1078 bytes)
	I1003 20:42:24.499778    4280 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:42:24.499795    4280 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem (1675 bytes)
	I1003 20:42:24.499833    4280 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:42:24.500176    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:42:24.508209    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 20:42:24.515950    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:42:24.523618    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:42:24.531601    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:42:24.538662    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 20:42:24.545436    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:42:24.552384    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 20:42:24.559688    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:42:24.566920    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem --> /usr/share/ca-certificates/1556.pem (1338 bytes)
	I1003 20:42:24.574057    4280 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /usr/share/ca-certificates/15562.pem (1708 bytes)
	I1003 20:42:24.580831    4280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:42:24.585755    4280 ssh_runner.go:195] Run: openssl version
	I1003 20:42:24.587613    4280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:42:24.591005    4280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:42:24.592557    4280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:42:24.592580    4280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:42:24.594355    4280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 20:42:24.596931    4280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1556.pem && ln -fs /usr/share/ca-certificates/1556.pem /etc/ssl/certs/1556.pem"
	I1003 20:42:24.600354    4280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1556.pem
	I1003 20:42:24.601745    4280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:05 /usr/share/ca-certificates/1556.pem
	I1003 20:42:24.601771    4280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1556.pem
	I1003 20:42:24.603429    4280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1556.pem /etc/ssl/certs/51391683.0"
	I1003 20:42:24.606473    4280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15562.pem && ln -fs /usr/share/ca-certificates/15562.pem /etc/ssl/certs/15562.pem"
	I1003 20:42:24.609288    4280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15562.pem
	I1003 20:42:24.610727    4280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:05 /usr/share/ca-certificates/15562.pem
	I1003 20:42:24.610750    4280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15562.pem
	I1003 20:42:24.612698    4280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15562.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:42:24.615788    4280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:42:24.617480    4280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:42:24.619281    4280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:42:24.621242    4280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:42:24.623112    4280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:42:24.625228    4280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:42:24.627203    4280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 20:42:24.628982    4280 kubeadm.go:392] StartCluster: {Name:running-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50280 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:42:24.629059    4280 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:42:24.638977    4280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:42:24.642489    4280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:42:24.642500    4280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:42:24.642536    4280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:42:24.645213    4280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:42:24.645441    4280 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-902000" does not appear in /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:42:24.645491    4280 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1040/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-902000" cluster setting kubeconfig missing "running-upgrade-902000" context setting]
	I1003 20:42:24.645623    4280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:42:24.646782    4280 kapi.go:59] client config for running-upgrade-902000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021c25d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:42:24.647128    4280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:42:24.650096    4280 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-902000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
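The drift detection shown in the diff above boils down to comparing the existing kubeadm.yaml with the newly rendered kubeadm.yaml.new and reconfiguring when they differ. A minimal sketch of that check, assuming `diff -u` is available on the guest and treating exit code 1 as "files differ" (this is an illustration, not minikube's actual code path):

```go
// Illustrative config-drift check: shell out to `diff -u` and interpret
// the exit code. Exit 0 means identical, exit 1 means the files differ,
// anything else means diff itself failed. File paths are assumptions.
package main

import (
	"fmt"
	"os/exec"
)

func configDrifted(current, proposed string) (bool, string, error) {
	cmd := exec.Command("sudo", "diff", "-u", current, proposed)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit >1: diff failed outright
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}
```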
	I1003 20:42:24.650101    4280 kubeadm.go:1160] stopping kube-system containers ...
	I1003 20:42:24.650146    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:42:24.661083    4280 docker.go:483] Stopping containers: [c986ac2733a3 6978d980267b 684447ebed9f e069c6569d0d d495a53ce56f 01d2ddfaacd4 c21a6a4f15b9 fbfb303c2ba7 db564f93edf1 49b59aba2840 19ed3440f6a0 98d968c38205]
	I1003 20:42:24.661156    4280 ssh_runner.go:195] Run: docker stop c986ac2733a3 6978d980267b 684447ebed9f e069c6569d0d d495a53ce56f 01d2ddfaacd4 c21a6a4f15b9 fbfb303c2ba7 db564f93edf1 49b59aba2840 19ed3440f6a0 98d968c38205
	I1003 20:42:24.671981    4280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 20:42:24.769381    4280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:42:24.774138    4280 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct  4 03:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct  4 03:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  4 03:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct  4 03:41 /etc/kubernetes/scheduler.conf
	
	I1003 20:42:24.774178    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/admin.conf
	I1003 20:42:24.777858    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:42:24.777907    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:42:24.781434    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/kubelet.conf
	I1003 20:42:24.784581    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:42:24.784621    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:42:24.788041    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/controller-manager.conf
	I1003 20:42:24.791278    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:42:24.791311    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:42:24.794478    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/scheduler.conf
	I1003 20:42:24.797443    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:42:24.797469    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:42:24.800063    4280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:42:24.803347    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:42:24.834558    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:42:25.138315    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:42:25.343099    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:42:25.384158    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:42:25.430178    4280 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:42:25.430274    4280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:42:25.932611    4280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:42:26.432385    4280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:42:26.436323    4280 api_server.go:72] duration metric: took 1.006148333s to wait for apiserver process to appear ...
	I1003 20:42:26.436331    4280 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:42:26.436346    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:42:31.438470    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:42:31.438530    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:42:36.438955    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:42:36.439025    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:42:41.439645    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:42:41.439732    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:42:46.440922    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:42:46.441036    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:42:51.442703    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:42:51.442808    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:42:56.444783    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:42:56.444896    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:01.447377    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:01.447465    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:06.450083    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:06.450171    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:11.452839    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:11.453030    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:16.455783    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:16.455843    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:21.456654    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:21.456678    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:26.458842    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
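The repeated "Checking apiserver healthz ... stopped" lines above are a polling loop against the apiserver's /healthz endpoint that times out roughly every five seconds and retries until an overall deadline, after which diagnostic logs are gathered. A minimal sketch of that pattern (endpoint URL, timeouts, and the use of InsecureSkipVerify are assumptions for brevity, not minikube's implementation):

```go
// Poll an apiserver /healthz endpoint with a short per-request timeout
// until it returns 200 OK or an overall deadline is exceeded.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly matches the ~5s gaps between checks in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, deadline)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```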
	I1003 20:43:26.458961    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:43:26.469764    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:43:26.469857    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:43:26.480644    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:43:26.480721    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:43:26.490942    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:43:26.491025    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:43:26.501237    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:43:26.501314    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:43:26.511371    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:43:26.511445    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:43:26.521680    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:43:26.521770    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:43:26.531750    4280 logs.go:282] 0 containers: []
	W1003 20:43:26.531763    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:43:26.531833    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:43:26.542030    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:43:26.542062    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:43:26.542067    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:43:26.546887    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:43:26.546895    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:43:26.561925    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:43:26.561934    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:43:26.576625    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:43:26.576634    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:43:26.648044    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:43:26.648054    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:43:26.665290    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:43:26.665300    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:43:26.677155    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:43:26.677165    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:43:26.690215    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:43:26.690226    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:43:26.706320    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:43:26.706330    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:43:26.718820    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:43:26.718830    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:43:26.729935    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:43:26.729943    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:43:26.755808    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:43:26.755818    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:43:26.766631    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:43:26.766641    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:43:26.779583    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:43:26.779594    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:43:26.815109    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:43:26.815120    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:43:26.829653    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:43:26.829663    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:43:26.849369    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:43:26.849379    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:43:29.365202    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:34.367856    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:34.368446    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:43:34.411651    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:43:34.411803    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:43:34.437079    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:43:34.437175    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:43:34.451781    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:43:34.451871    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:43:34.467214    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:43:34.467293    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:43:34.477885    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:43:34.477971    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:43:34.496621    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:43:34.496703    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:43:34.515717    4280 logs.go:282] 0 containers: []
	W1003 20:43:34.515729    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:43:34.515797    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:43:34.526719    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:43:34.526744    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:43:34.526749    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:43:34.544273    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:43:34.544282    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:43:34.570648    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:43:34.570658    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:43:34.582491    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:43:34.582501    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:43:34.596625    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:43:34.596634    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:43:34.632359    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:43:34.632371    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:43:34.659863    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:43:34.659874    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:43:34.664091    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:43:34.664099    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:43:34.680410    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:43:34.680421    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:43:34.691811    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:43:34.691824    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:43:34.704858    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:43:34.704868    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:43:34.742616    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:43:34.742625    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:43:34.756616    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:43:34.756626    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:43:34.767535    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:43:34.767546    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:43:34.781865    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:43:34.781874    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:43:34.793049    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:43:34.793058    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:43:34.808135    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:43:34.808147    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:43:37.324566    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:42.327368    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:42.327830    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:43:42.362637    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:43:42.362790    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:43:42.388207    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:43:42.388313    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:43:42.405640    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:43:42.405719    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:43:42.417315    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:43:42.417396    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:43:42.428523    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:43:42.428598    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:43:42.439223    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:43:42.439304    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:43:42.449743    4280 logs.go:282] 0 containers: []
	W1003 20:43:42.449754    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:43:42.449820    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:43:42.461221    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:43:42.461241    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:43:42.461248    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:43:42.475959    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:43:42.475968    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:43:42.489774    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:43:42.489784    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:43:42.501056    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:43:42.501067    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:43:42.515753    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:43:42.515761    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:43:42.528929    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:43:42.528937    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:43:42.541165    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:43:42.541177    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:43:42.545849    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:43:42.545855    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:43:42.580905    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:43:42.580913    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:43:42.595658    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:43:42.595669    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:43:42.610344    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:43:42.610353    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:43:42.636281    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:43:42.636290    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:43:42.648195    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:43:42.648205    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:43:42.668078    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:43:42.668088    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:43:42.682335    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:43:42.682344    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:43:42.694849    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:43:42.694859    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:43:42.712310    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:43:42.712319    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:43:45.249491    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:50.252245    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:50.252825    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:43:50.289311    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:43:50.289471    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:43:50.312042    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:43:50.312132    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:43:50.329350    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:43:50.329442    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:43:50.341117    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:43:50.341195    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:43:50.352348    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:43:50.352423    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:43:50.362947    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:43:50.363022    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:43:50.378343    4280 logs.go:282] 0 containers: []
	W1003 20:43:50.378353    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:43:50.378417    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:43:50.391078    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:43:50.391096    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:43:50.391102    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:43:50.408685    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:43:50.408698    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:43:50.421562    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:43:50.421572    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:43:50.435623    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:43:50.435634    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:43:50.451326    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:43:50.451336    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:43:50.463014    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:43:50.463024    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:43:50.475206    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:43:50.475217    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:43:50.486875    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:43:50.486886    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:43:50.491799    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:43:50.491811    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:43:50.537322    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:43:50.537335    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:43:50.553256    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:43:50.553266    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:43:50.564772    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:43:50.564782    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:43:50.576336    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:43:50.576346    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:43:50.601671    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:43:50.601678    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:43:50.639038    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:43:50.639047    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:43:50.658361    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:43:50.658371    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:43:50.672375    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:43:50.672385    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:43:53.189202    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:43:58.190766    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:43:58.190925    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:43:58.209493    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:43:58.209564    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:43:58.222378    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:43:58.222431    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:43:58.233394    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:43:58.233461    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:43:58.243758    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:43:58.243823    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:43:58.255027    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:43:58.255093    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:43:58.269769    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:43:58.269847    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:43:58.280247    4280 logs.go:282] 0 containers: []
	W1003 20:43:58.280258    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:43:58.280320    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:43:58.290766    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:43:58.290785    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:43:58.290789    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:43:58.326806    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:43:58.326813    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:43:58.345391    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:43:58.345400    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:43:58.358684    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:43:58.358694    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:43:58.363057    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:43:58.363064    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:43:58.376900    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:43:58.376913    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:43:58.391856    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:43:58.391868    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:43:58.402924    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:43:58.402935    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:43:58.416859    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:43:58.416871    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:43:58.435565    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:43:58.435576    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:43:58.447344    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:43:58.447354    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:43:58.459504    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:43:58.459516    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:43:58.494257    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:43:58.494271    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:43:58.513785    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:43:58.513796    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:43:58.525397    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:43:58.525411    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:43:58.550400    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:43:58.550410    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:43:58.564834    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:43:58.564849    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:01.080037    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:44:06.082932    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:44:06.083484    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:44:06.125150    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:44:06.125299    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:44:06.147785    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:44:06.147908    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:44:06.162699    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:44:06.162772    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:44:06.175007    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:44:06.175075    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:44:06.186183    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:44:06.186258    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:44:06.196949    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:44:06.197034    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:44:06.206799    4280 logs.go:282] 0 containers: []
	W1003 20:44:06.206812    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:44:06.206868    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:44:06.217626    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:44:06.217643    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:44:06.217648    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:44:06.236329    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:44:06.236338    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:44:06.252611    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:44:06.252621    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:44:06.289394    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:44:06.289404    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:44:06.304049    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:44:06.304058    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:44:06.321433    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:44:06.321442    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:44:06.332805    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:44:06.332814    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:06.344732    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:44:06.344741    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:44:06.359047    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:44:06.359057    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:44:06.370706    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:44:06.370717    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:44:06.385556    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:44:06.385565    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:44:06.400642    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:44:06.400653    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:44:06.426411    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:44:06.426421    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:44:06.430705    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:44:06.430713    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:44:06.444422    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:44:06.444431    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:44:06.455837    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:44:06.455845    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:44:06.490941    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:44:06.490947    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:44:09.004369    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:44:14.007205    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:44:14.007520    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:44:14.036863    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:44:14.036997    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:44:14.055051    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:44:14.055133    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:44:14.070581    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:44:14.070661    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:44:14.082267    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:44:14.082344    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:44:14.092608    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:44:14.092676    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:44:14.102976    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:44:14.103048    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:44:14.116857    4280 logs.go:282] 0 containers: []
	W1003 20:44:14.116866    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:44:14.116920    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:44:14.127335    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:44:14.127354    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:44:14.127360    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:44:14.138743    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:44:14.138753    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:44:14.155829    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:44:14.155843    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:44:14.159942    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:44:14.159948    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:44:14.178619    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:44:14.178631    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:44:14.192982    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:44:14.192998    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:44:14.204214    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:44:14.204227    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:44:14.225788    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:44:14.225801    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:14.237750    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:44:14.237764    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:44:14.277808    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:44:14.277821    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:44:14.312279    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:44:14.312292    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:44:14.326300    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:44:14.326312    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:44:14.344609    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:44:14.344620    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:44:14.355990    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:44:14.356001    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:44:14.370334    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:44:14.370346    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:44:14.397007    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:44:14.397019    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:44:14.410539    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:44:14.410550    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:44:16.936035    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:44:21.938400    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:44:21.938537    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:44:21.950356    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:44:21.950440    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:44:21.961259    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:44:21.961345    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:44:21.972141    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:44:21.972212    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:44:21.982518    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:44:21.982603    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:44:21.992649    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:44:21.992727    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:44:22.006862    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:44:22.006930    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:44:22.017000    4280 logs.go:282] 0 containers: []
	W1003 20:44:22.017012    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:44:22.017070    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:44:22.027835    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:44:22.027853    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:44:22.027859    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:44:22.032158    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:44:22.032167    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:44:22.046026    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:44:22.046037    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:44:22.057934    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:44:22.057946    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:44:22.072186    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:44:22.072196    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:44:22.084010    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:44:22.084020    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:44:22.108394    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:44:22.108401    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:44:22.127806    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:44:22.127816    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:44:22.139809    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:44:22.139819    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:22.152348    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:44:22.152359    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:44:22.189094    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:44:22.189101    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:44:22.230178    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:44:22.230194    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:44:22.244524    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:44:22.244535    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:44:22.258792    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:44:22.258802    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:44:22.274011    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:44:22.274021    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:44:22.291555    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:44:22.291565    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:44:22.306513    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:44:22.306522    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:44:24.821946    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:44:29.824340    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:44:29.824874    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:44:29.860808    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:44:29.860969    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:44:29.881313    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:44:29.881452    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:44:29.896143    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:44:29.896230    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:44:29.908054    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:44:29.908131    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:44:29.918488    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:44:29.918573    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:44:29.929117    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:44:29.929198    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:44:29.944354    4280 logs.go:282] 0 containers: []
	W1003 20:44:29.944365    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:44:29.944430    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:44:29.954756    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:44:29.954774    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:44:29.954779    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:44:29.970559    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:44:29.970569    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:44:29.984754    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:44:29.984765    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:44:30.022498    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:44:30.022510    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:44:30.033749    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:44:30.033762    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:44:30.049696    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:44:30.049706    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:44:30.074897    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:44:30.074904    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:44:30.089182    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:44:30.089191    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:44:30.124066    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:44:30.124076    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:44:30.144792    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:44:30.144803    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:44:30.156252    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:44:30.156264    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:44:30.167972    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:44:30.167982    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:30.180149    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:44:30.180164    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:44:30.184299    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:44:30.184306    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:44:30.198480    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:44:30.198493    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:44:30.219564    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:44:30.219574    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:44:30.230374    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:44:30.230383    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:44:32.746523    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:44:37.748717    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:44:37.748845    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:44:37.759913    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:44:37.759994    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:44:37.771164    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:44:37.771236    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:44:37.782105    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:44:37.782198    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:44:37.792571    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:44:37.792662    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:44:37.803152    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:44:37.803231    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:44:37.814385    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:44:37.814464    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:44:37.824902    4280 logs.go:282] 0 containers: []
	W1003 20:44:37.824913    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:44:37.824983    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:44:37.835987    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:44:37.836007    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:44:37.836012    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:44:37.854726    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:44:37.854744    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:44:37.869300    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:44:37.869310    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:44:37.886783    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:44:37.886794    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:37.898946    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:44:37.898963    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:44:37.903391    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:44:37.903401    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:44:37.944946    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:44:37.944956    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:44:37.956054    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:44:37.956066    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:44:37.969291    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:44:37.969301    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:44:37.993900    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:44:37.993914    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:44:38.031062    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:44:38.031070    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:44:38.045786    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:44:38.045799    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:44:38.061333    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:44:38.061347    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:44:38.073229    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:44:38.073243    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:44:38.086064    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:44:38.086074    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:44:38.100062    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:44:38.100075    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:44:38.118725    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:44:38.118735    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:44:40.632624    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:44:45.634954    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:44:45.635442    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:44:45.675653    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:44:45.675811    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:44:45.696853    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:44:45.696996    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:44:45.712294    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:44:45.712388    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:44:45.724755    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:44:45.724853    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:44:45.736251    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:44:45.736340    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:44:45.747315    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:44:45.747394    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:44:45.757837    4280 logs.go:282] 0 containers: []
	W1003 20:44:45.757849    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:44:45.757913    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:44:45.768820    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:44:45.768837    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:44:45.768843    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:45.781120    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:44:45.781132    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:44:45.786148    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:44:45.786154    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:44:45.800414    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:44:45.800427    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:44:45.812214    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:44:45.812227    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:44:45.833179    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:44:45.833192    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:44:45.846819    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:44:45.846832    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:44:45.875761    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:44:45.875778    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:44:45.911384    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:44:45.911392    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:44:45.945437    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:44:45.945450    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:44:45.959493    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:44:45.959502    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:44:45.971231    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:44:45.971245    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:44:45.990076    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:44:45.990089    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:44:46.004343    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:44:46.004356    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:44:46.018588    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:44:46.018597    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:44:46.033340    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:44:46.033353    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:44:46.049250    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:44:46.049263    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:44:48.562535    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:44:53.563314    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:44:53.563427    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:44:53.579809    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:44:53.579910    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:44:53.595865    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:44:53.595949    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:44:53.608364    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:44:53.608444    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:44:53.620599    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:44:53.620683    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:44:53.632130    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:44:53.632214    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:44:53.643819    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:44:53.643904    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:44:53.655390    4280 logs.go:282] 0 containers: []
	W1003 20:44:53.655404    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:44:53.655477    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:44:53.667342    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:44:53.667360    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:44:53.667365    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:44:53.680764    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:44:53.680780    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:44:53.695405    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:44:53.695417    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:44:53.707895    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:44:53.707906    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:44:53.723427    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:44:53.723441    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:44:53.743330    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:44:53.743346    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:44:53.757597    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:44:53.757607    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:44:53.782219    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:44:53.782231    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:44:53.796346    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:44:53.796356    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:44:53.832553    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:44:53.832569    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:44:53.843812    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:44:53.843823    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:44:53.866821    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:44:53.866828    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:44:53.870975    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:44:53.870983    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:44:53.905124    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:44:53.905133    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:44:53.917510    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:44:53.917523    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:44:53.935730    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:44:53.935740    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:44:53.947183    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:44:53.947193    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:44:56.463118    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:01.465412    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:01.465571    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:01.481856    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:01.481967    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:01.495374    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:01.495457    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:01.506849    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:01.506930    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:01.517592    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:01.517675    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:01.527510    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:01.527590    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:01.538018    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:01.538096    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:01.548569    4280 logs.go:282] 0 containers: []
	W1003 20:45:01.548581    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:01.548648    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:01.560874    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:01.560894    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:01.560900    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:01.599435    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:01.599447    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:01.619081    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:01.619090    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:01.631666    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:01.631675    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:01.643015    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:01.643026    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:01.658252    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:01.658263    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:01.677144    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:01.677153    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:01.691735    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:01.691747    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:01.703351    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:01.703362    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:01.714662    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:01.714673    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:01.750401    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:01.750419    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:01.754884    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:01.754891    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:01.768276    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:01.768285    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:01.791454    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:01.791461    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:01.802764    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:01.802777    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:01.816762    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:01.816772    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:01.828055    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:01.828067    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:04.347070    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:09.349858    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:09.350429    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:09.394625    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:09.394783    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:09.415756    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:09.415861    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:09.430549    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:09.430620    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:09.443337    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:09.443420    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:09.453968    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:09.454037    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:09.468778    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:09.468847    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:09.478953    4280 logs.go:282] 0 containers: []
	W1003 20:45:09.478964    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:09.479032    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:09.489874    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:09.489895    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:09.489904    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:09.509455    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:09.509464    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:09.532987    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:09.532998    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:09.545555    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:09.545565    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:09.559155    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:09.559167    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:09.574171    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:09.574187    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:09.585711    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:09.585721    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:09.600648    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:09.600656    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:09.623744    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:09.623752    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:09.638546    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:09.638555    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:09.652496    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:09.652504    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:09.663910    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:09.663920    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:09.690393    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:09.690412    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:09.729877    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:09.729897    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:09.734607    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:09.734614    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:09.770677    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:09.770690    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:09.785593    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:09.785612    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:12.304959    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:17.307589    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:17.307783    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:17.327573    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:17.327658    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:17.338755    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:17.338840    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:17.352705    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:17.352770    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:17.367021    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:17.367094    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:17.378338    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:17.378396    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:17.391119    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:17.391182    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:17.405726    4280 logs.go:282] 0 containers: []
	W1003 20:45:17.405742    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:17.405804    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:17.415795    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:17.415813    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:17.415820    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:17.429817    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:17.429826    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:17.444854    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:17.444864    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:17.456762    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:17.456773    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:17.470833    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:17.470845    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:17.485158    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:17.485167    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:17.511109    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:17.511115    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:17.525765    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:17.525774    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:17.542253    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:17.542265    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:17.557411    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:17.557422    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:17.597194    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:17.597204    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:17.602132    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:17.602141    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:17.637465    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:17.637478    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:17.651582    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:17.651592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:17.671636    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:17.671649    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:17.691055    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:17.691064    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:17.702581    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:17.702590    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:20.216743    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:25.219122    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:25.219677    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:25.260792    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:25.260955    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:25.282496    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:25.282622    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:25.300890    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:25.300973    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:25.313075    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:25.313160    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:25.323696    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:25.323781    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:25.338080    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:25.338166    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:25.365896    4280 logs.go:282] 0 containers: []
	W1003 20:45:25.365910    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:25.365989    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:25.385189    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:25.385209    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:25.385214    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:25.411622    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:25.411633    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:25.423047    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:25.423057    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:25.440651    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:25.440662    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:25.458754    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:25.458769    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:25.495003    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:25.495014    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:25.499333    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:25.499342    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:25.514600    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:25.514611    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:25.532876    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:25.532889    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:25.547372    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:25.547383    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:25.562791    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:25.562805    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:25.600363    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:25.600378    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:25.619682    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:25.619694    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:25.631574    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:25.631587    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:25.654009    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:25.654016    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:25.665920    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:25.665935    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:25.680701    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:25.680710    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:28.193459    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:33.196222    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:33.196336    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:33.212864    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:33.212945    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:33.232212    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:33.232301    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:33.244607    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:33.244695    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:33.256145    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:33.256232    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:33.269177    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:33.269376    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:33.285192    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:33.285276    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:33.297042    4280 logs.go:282] 0 containers: []
	W1003 20:45:33.297055    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:33.297133    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:33.309370    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:33.309389    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:33.309395    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:33.325105    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:33.325118    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:33.338311    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:33.338325    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:33.351310    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:33.351324    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:33.393997    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:33.394009    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:33.408825    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:33.408838    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:33.423034    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:33.423051    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:33.460564    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:33.460586    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:33.483161    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:33.483175    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:33.500714    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:33.500733    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:33.522773    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:33.522787    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:33.542259    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:33.542273    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:33.556542    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:33.556557    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:33.582856    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:33.582881    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:33.587763    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:33.587776    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:33.600606    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:33.600617    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:33.613253    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:33.613266    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:36.133303    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:41.134134    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:41.134297    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:41.145772    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:41.145854    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:41.160779    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:41.160860    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:41.171767    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:41.171823    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:41.182878    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:41.182954    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:41.197087    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:41.197159    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:41.209013    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:41.209094    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:41.220536    4280 logs.go:282] 0 containers: []
	W1003 20:45:41.220549    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:41.220615    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:41.231647    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:41.231665    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:41.231670    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:41.271686    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:41.271701    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:41.310932    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:41.310945    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:41.325555    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:41.325567    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:41.344983    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:41.344994    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:41.362441    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:41.362453    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:41.374545    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:41.374557    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:41.396821    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:41.396840    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:41.412336    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:41.412352    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:41.434235    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:41.434246    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:41.452124    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:41.452136    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:41.470140    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:41.470151    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:41.494199    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:41.494208    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:41.498901    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:41.498910    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:41.514263    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:41.514276    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:41.526566    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:41.526579    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:41.541185    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:41.541200    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:44.056208    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:49.058454    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:49.058687    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:49.081340    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:49.081456    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:49.096924    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:49.097018    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:49.110996    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:49.111080    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:49.122042    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:49.122121    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:49.132524    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:49.132603    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:49.143148    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:49.143229    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:49.153434    4280 logs.go:282] 0 containers: []
	W1003 20:45:49.153451    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:49.153520    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:49.164326    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:49.164345    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:49.164350    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:49.178617    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:49.178628    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:49.183186    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:49.183192    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:49.197461    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:49.197471    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:49.208776    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:49.208786    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:49.222256    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:49.222266    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:49.234154    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:49.234170    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:49.249025    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:49.249035    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:49.265608    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:49.265618    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:49.277142    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:49.277153    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:49.289703    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:49.289715    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:49.302956    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:49.302968    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:49.325598    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:49.325605    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:49.362114    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:49.362123    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:49.399966    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:49.399981    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:49.421946    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:49.421960    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:49.436630    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:49.436643    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:51.949475    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:56.951782    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:56.951970    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:56.967802    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:56.967893    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:56.979881    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:56.979960    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:56.990839    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:56.990917    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:57.002144    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:57.002226    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:57.016011    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:57.016088    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:57.031373    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:57.031447    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:57.041432    4280 logs.go:282] 0 containers: []
	W1003 20:45:57.041447    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:57.041504    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:57.055714    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:57.055731    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:57.055737    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:57.070133    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:57.070143    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:57.083789    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:57.083800    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:57.110697    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:57.110707    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:57.123238    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:57.123248    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:57.138036    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:57.138046    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:57.152665    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:57.152676    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:57.164948    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:57.164958    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:57.202265    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:57.202281    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:57.239746    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:57.239756    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:57.258918    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:57.258929    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:57.274251    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:57.274263    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:57.285826    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:57.285837    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:57.309803    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:57.309810    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:57.314587    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:57.314592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:57.325858    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:57.325868    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:57.343595    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:57.343608    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:59.859926    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:04.862220    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:04.862481    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:04.882893    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:46:04.883006    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:04.897205    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:46:04.897299    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:04.909543    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:46:04.909629    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:04.920380    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:46:04.920458    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:04.930909    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:46:04.930988    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:04.941883    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:46:04.941963    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:04.952051    4280 logs.go:282] 0 containers: []
	W1003 20:46:04.952061    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:04.952129    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:04.962921    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:46:04.962941    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:46:04.962947    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:46:04.979816    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:46:04.979831    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:46:04.990924    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:04.990933    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:05.024992    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:46:05.025008    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:46:05.039645    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:46:05.039655    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:46:05.063205    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:46:05.063215    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:46:05.080826    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:46:05.080836    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:46:05.092155    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:05.092166    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:05.114522    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:05.114530    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:05.151096    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:05.151104    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:05.155502    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:46:05.155512    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:46:05.173136    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:46:05.173145    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:46:05.190503    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:46:05.190516    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:46:05.212912    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:46:05.212922    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:05.224905    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:46:05.224921    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:46:05.239117    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:46:05.239128    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:46:05.251873    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:46:05.251883    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:46:07.765146    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:12.767860    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:12.768059    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:12.787427    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:46:12.787540    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:12.801960    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:46:12.802050    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:12.814061    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:46:12.814139    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:12.825172    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:46:12.825253    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:12.836298    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:46:12.836378    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:12.847336    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:46:12.847419    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:12.857017    4280 logs.go:282] 0 containers: []
	W1003 20:46:12.857030    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:12.857095    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:12.867601    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:46:12.867624    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:46:12.867629    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:46:12.881664    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:46:12.881674    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:46:12.892700    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:46:12.892714    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:12.905560    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:12.905575    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:12.910622    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:46:12.910631    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:46:12.925007    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:46:12.925016    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:46:12.938621    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:46:12.938631    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:46:12.953722    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:46:12.953744    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:46:12.965032    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:12.965043    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:12.987207    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:12.987214    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:13.022456    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:46:13.022471    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:46:13.041840    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:46:13.041854    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:46:13.055076    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:46:13.055085    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:46:13.072959    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:46:13.072970    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:46:13.086003    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:13.086015    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:13.124483    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:46:13.124499    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:46:13.139028    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:46:13.139039    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:46:15.653804    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:20.656172    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:20.656461    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:20.678962    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:46:20.679100    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:20.693990    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:46:20.694078    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:20.706028    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:46:20.706108    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:20.722746    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:46:20.722826    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:20.734109    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:46:20.734190    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:20.744902    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:46:20.744978    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:20.754938    4280 logs.go:282] 0 containers: []
	W1003 20:46:20.754949    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:20.755013    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:20.765592    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:46:20.765610    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:46:20.765615    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:46:20.784279    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:46:20.784289    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:46:20.796200    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:20.796210    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:20.832162    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:46:20.832171    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:46:20.851583    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:46:20.851594    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:46:20.866905    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:46:20.866916    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:46:20.881698    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:46:20.881707    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:20.911414    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:20.911425    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:20.915640    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:46:20.915649    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:46:20.932755    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:46:20.932765    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:46:20.944001    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:46:20.944015    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:46:20.956714    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:20.956725    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:20.992156    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:46:20.992171    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:46:21.006605    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:46:21.006615    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:46:21.018893    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:46:21.018904    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:46:21.034980    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:46:21.034990    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:46:21.046981    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:21.046990    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:23.570465    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:28.572823    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:28.572930    4280 kubeadm.go:597] duration metric: took 4m3.930379667s to restartPrimaryControlPlane
	W1003 20:46:28.572999    4280 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 20:46:28.573028    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1003 20:46:29.554229    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:46:29.559382    4280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:46:29.562362    4280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:46:29.565318    4280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:46:29.565323    4280 kubeadm.go:157] found existing configuration files:
	
	I1003 20:46:29.565356    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/admin.conf
	I1003 20:46:29.567692    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:46:29.567720    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:46:29.570277    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/kubelet.conf
	I1003 20:46:29.573008    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:46:29.573037    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:46:29.575467    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/controller-manager.conf
	I1003 20:46:29.578356    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:46:29.578391    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:46:29.581643    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/scheduler.conf
	I1003 20:46:29.584193    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:46:29.584223    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:46:29.586689    4280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:46:29.603999    4280 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1003 20:46:29.604128    4280 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:46:29.650960    4280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:46:29.651050    4280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:46:29.651103    4280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 20:46:29.699694    4280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:46:29.703905    4280 out.go:235]   - Generating certificates and keys ...
	I1003 20:46:29.703942    4280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:46:29.703979    4280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:46:29.704031    4280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 20:46:29.704108    4280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1003 20:46:29.704205    4280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 20:46:29.704256    4280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1003 20:46:29.704307    4280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1003 20:46:29.704369    4280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1003 20:46:29.704468    4280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 20:46:29.707648    4280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 20:46:29.707667    4280 kubeadm.go:310] [certs] Using the existing "sa" key
	I1003 20:46:29.707710    4280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:46:29.781296    4280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:46:29.965117    4280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:46:30.101627    4280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:46:30.194647    4280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:46:30.226473    4280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:46:30.226920    4280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:46:30.226955    4280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:46:30.320793    4280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:46:30.324703    4280 out.go:235]   - Booting up control plane ...
	I1003 20:46:30.324877    4280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:46:30.324960    4280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:46:30.325059    4280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:46:30.325119    4280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:46:30.325214    4280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 20:46:34.824780    4280 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502876 seconds
	I1003 20:46:34.824897    4280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:46:34.847467    4280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:46:35.360837    4280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:46:35.361014    4280 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-902000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:46:35.867072    4280 kubeadm.go:310] [bootstrap-token] Using token: 8gn5wk.xe0im0a4rkjxu2gw
	I1003 20:46:35.873791    4280 out.go:235]   - Configuring RBAC rules ...
	I1003 20:46:35.873878    4280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:46:35.873955    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:46:35.880614    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:46:35.881917    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:46:35.883093    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:46:35.884569    4280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:46:35.888797    4280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:46:36.035394    4280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:46:36.272538    4280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:46:36.272918    4280 kubeadm.go:310] 
	I1003 20:46:36.272953    4280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:46:36.272956    4280 kubeadm.go:310] 
	I1003 20:46:36.273011    4280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:46:36.273015    4280 kubeadm.go:310] 
	I1003 20:46:36.273028    4280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:46:36.273065    4280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:46:36.273151    4280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:46:36.273156    4280 kubeadm.go:310] 
	I1003 20:46:36.273181    4280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:46:36.273183    4280 kubeadm.go:310] 
	I1003 20:46:36.273287    4280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:46:36.273295    4280 kubeadm.go:310] 
	I1003 20:46:36.273319    4280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:46:36.273367    4280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:46:36.273457    4280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:46:36.273465    4280 kubeadm.go:310] 
	I1003 20:46:36.273503    4280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:46:36.273559    4280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:46:36.273564    4280 kubeadm.go:310] 
	I1003 20:46:36.273669    4280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8gn5wk.xe0im0a4rkjxu2gw \
	I1003 20:46:36.273723    4280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 \
	I1003 20:46:36.273734    4280 kubeadm.go:310] 	--control-plane 
	I1003 20:46:36.273737    4280 kubeadm.go:310] 
	I1003 20:46:36.273791    4280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:46:36.273796    4280 kubeadm.go:310] 
	I1003 20:46:36.273835    4280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8gn5wk.xe0im0a4rkjxu2gw \
	I1003 20:46:36.273904    4280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 
	I1003 20:46:36.273974    4280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
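
The join commands printed above carry a --discovery-token-ca-cert-hash. For reference, kubeadm derives that value as the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; a minimal Go sketch of that computation (assuming the CA sits at the conventional /etc/kubernetes/pki/ca.crt path on the node) looks like:

```go
// spki_hash.go - recompute a kubeadm discovery-token-ca-cert-hash (sketch).
// Assumes the cluster CA is at the conventional /etc/kubernetes/pki/ca.crt path.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```

Run against this node's CA it should reproduce the sha256:e258f4... value embedded in the join commands above.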
	I1003 20:46:36.273986    4280 cni.go:84] Creating CNI manager for ""
	I1003 20:46:36.273995    4280 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:46:36.277788    4280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 20:46:36.284749    4280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 20:46:36.287762    4280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
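
The 496-byte /etc/cni/net.d/1-k8s.conflist that gets copied over is not printed in the log; as a rough illustration of the step, the sketch below writes a representative bridge + portmap configuration in the standard CNI conflist format (the JSON contents are illustrative, not minikube's exact file):

```go
// write_conflist.go - write an illustrative bridge CNI conflist (sketch).
// The JSON below is a generic bridge+portmap configuration, not a copy of
// the exact 1-k8s.conflist minikube generates.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Create the directory first, mirroring the "mkdir -p /etc/cni/net.d"
	// step in the log, then drop the conflist where the CRI will pick it up.
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		log.Fatal(err)
	}
}
```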
	I1003 20:46:36.293177    4280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:46:36.293234    4280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:46:36.293236    4280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-902000 minikube.k8s.io/updated_at=2024_10_03T20_46_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=running-upgrade-902000 minikube.k8s.io/primary=true
	I1003 20:46:36.340995    4280 kubeadm.go:1113] duration metric: took 47.811375ms to wait for elevateKubeSystemPrivileges
	I1003 20:46:36.341012    4280 ops.go:34] apiserver oom_adj: -16
	I1003 20:46:36.341018    4280 kubeadm.go:394] duration metric: took 4m11.712015667s to StartCluster
	I1003 20:46:36.341033    4280 settings.go:142] acquiring lock: {Name:mkcb41cafeed9afeb88d9d6f184696173f92f60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:46:36.341133    4280 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:46:36.341551    4280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:46:36.341739    4280 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:46:36.341747    4280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:46:36.341785    4280 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-902000"
	I1003 20:46:36.341792    4280 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-902000"
	W1003 20:46:36.341799    4280 addons.go:243] addon storage-provisioner should already be in state true
	I1003 20:46:36.341810    4280 host.go:66] Checking if "running-upgrade-902000" exists ...
	I1003 20:46:36.341811    4280 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-902000"
	I1003 20:46:36.341822    4280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-902000"
	I1003 20:46:36.341883    4280 config.go:182] Loaded profile config "running-upgrade-902000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:46:36.342102    4280 retry.go:31] will retry after 1.005842733s: connect: dial unix /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/monitor: connect: connection refused
	I1003 20:46:36.342751    4280 kapi.go:59] client config for running-upgrade-902000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021c25d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
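
The rest.Config dumped above is minikube's internal client for the profile; building an equivalent client against the same kubeconfig with plain client-go would look roughly like this (a sketch using the stock loading path, with the kubeconfig location taken from the "Updating kubeconfig" line earlier in the log, rather than minikube's kapi helper):

```go
// kube_client.go - build a client for the profile's kubeconfig (sketch).
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the "Updating kubeconfig" line earlier in this log.
	kubeconfig := "/Users/jenkins/minikube-integration/19546-1040/kubeconfig"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cfg.Timeout = 0 // match the dumped config: no client-side timeout

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```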
	I1003 20:46:36.342869    4280 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-902000"
	W1003 20:46:36.342874    4280 addons.go:243] addon default-storageclass should already be in state true
	I1003 20:46:36.342880    4280 host.go:66] Checking if "running-upgrade-902000" exists ...
	I1003 20:46:36.343411    4280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:46:36.343415    4280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:46:36.343420    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	I1003 20:46:36.345725    4280 out.go:177] * Verifying Kubernetes components...
	I1003 20:46:36.353614    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:46:36.450294    4280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:46:36.455685    4280 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:46:36.455732    4280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:46:36.460573    4280 api_server.go:72] duration metric: took 118.821209ms to wait for apiserver process to appear ...
	I1003 20:46:36.460582    4280 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:46:36.460590    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:36.488074    4280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:46:36.790076    4280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:46:36.790089    4280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:46:37.354258    4280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:46:37.357199    4280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:46:37.357207    4280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:46:37.357216    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	I1003 20:46:37.394536    4280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
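
Addon enablement here is just a manifest copy plus a kubectl apply executed on the guest. A sketch of the same step run locally (os/exec standing in for minikube's ssh_runner; the paths and binary version are the ones visible in the log lines above):

```go
// apply_addon.go - re-run the logged addon apply command (sketch).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo KUBECONFIG=/var/lib/minikube/kubeconfig `+
			`/var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml`)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```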
	I1003 20:46:41.462654    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:41.462677    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:46.462884    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:46.462944    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:51.463728    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:51.463757    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:56.464249    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:56.464307    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:01.465061    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:01.465117    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:06.466213    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:06.466254    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1003 20:47:06.792355    4280 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1003 20:47:06.796563    4280 out.go:177] * Enabled addons: storage-provisioner
	I1003 20:47:06.804482    4280 addons.go:510] duration metric: took 30.462723542s for enable addons: enabled=[storage-provisioner]
	I1003 20:47:11.467379    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:11.467428    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:16.469095    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:16.469158    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:21.471084    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:21.471107    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:26.473312    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:26.473357    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:31.475662    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:31.475708    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:36.477675    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
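
Each "Checking apiserver healthz" / "stopped:" pair above is one probe of https://10.0.2.15:8443/healthz with a short per-request deadline; once the probes keep failing, minikube falls back to gathering container logs (next lines). A standalone sketch of that wait loop follows, with the endpoint and CA path taken from this log, and the 5s per-probe deadline and 4-minute overall budget assumed for illustration:

```go
// wait_healthz.go - poll the apiserver /healthz endpoint (sketch).
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "https://10.0.2.15:8443/healthz", nil)
		resp, err := client.Do(req)
		cancel()
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("apiserver never became healthy")
}
```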
	I1003 20:47:36.477793    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:36.493234    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:47:36.493304    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:36.503816    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:47:36.503887    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:36.515715    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:47:36.515797    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:36.527442    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:47:36.527518    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:36.538074    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:47:36.538151    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:36.548624    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:47:36.548697    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:36.563207    4280 logs.go:282] 0 containers: []
	W1003 20:47:36.563219    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:36.563314    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:36.573684    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:47:36.573700    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:47:36.573705    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:47:36.585241    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:36.585251    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:36.610011    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:47:36.610021    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:36.622499    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:36.622510    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:36.627688    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:47:36.627695    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:47:36.643068    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:47:36.643082    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:47:36.665300    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:47:36.665310    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:47:36.681636    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:47:36.681651    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:47:36.694372    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:47:36.694383    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:47:36.707283    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:47:36.707294    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:47:36.722697    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:36.722706    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:36.759261    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:36.759272    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:36.848828    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:47:36.848840    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
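
Every diagnostic pass in this log follows the same pattern: list the k8s_* containers with docker ps name filters, then tail each one's logs. A condensed sketch of that loop, run locally with os/exec (the real code pipes the identical commands through ssh_runner on the guest):

```go
// gather_logs.go - replay the container log-gathering pass (sketch).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "storage-provisioner",
}

func main() {
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Same tail depth as the logged "docker logs --tail 400" commands.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}
```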
	I1003 20:47:39.368380    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:44.370730    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:44.370966    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:44.393560    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:47:44.393671    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:44.409024    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:47:44.409114    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:44.421864    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:47:44.421943    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:44.432937    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:47:44.433014    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:44.443052    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:47:44.443130    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:44.453511    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:47:44.453575    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:44.463973    4280 logs.go:282] 0 containers: []
	W1003 20:47:44.463986    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:44.464049    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:44.474281    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:47:44.474297    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:44.474302    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:44.479445    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:47:44.479452    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:47:44.494820    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:47:44.494830    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:47:44.506330    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:47:44.506342    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:47:44.521407    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:47:44.521417    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:47:44.533015    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:47:44.533026    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:47:44.550303    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:47:44.550313    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:47:44.561507    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:44.561515    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:44.599828    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:44.599847    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:44.640262    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:47:44.640270    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:47:44.659765    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:47:44.659779    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:47:44.672589    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:44.672604    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:44.697526    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:47:44.697537    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:47.214611    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:52.216938    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:52.217202    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:52.231664    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:47:52.231741    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:52.243188    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:47:52.243266    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:52.256885    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:47:52.256960    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:52.268266    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:47:52.268341    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:52.279481    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:47:52.279552    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:52.293944    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:47:52.294016    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:52.304437    4280 logs.go:282] 0 containers: []
	W1003 20:47:52.304446    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:52.304506    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:52.314845    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:47:52.314860    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:47:52.314864    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:47:52.326428    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:52.326442    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:52.331401    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:52.331407    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:52.371948    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:47:52.371958    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:47:52.388042    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:47:52.388056    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:47:52.401813    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:47:52.401826    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:47:52.419794    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:52.419803    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:52.443273    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:47:52.443281    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:52.454469    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:52.454477    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:52.489064    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:47:52.489077    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:47:52.502212    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:47:52.502227    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:47:52.515033    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:47:52.515041    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:47:52.531285    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:47:52.531300    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:47:55.046151    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:00.048410    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:00.048627    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:00.064567    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:00.064668    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:00.076831    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:00.076911    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:00.088086    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:00.088167    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:00.102975    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:00.103047    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:00.113162    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:00.113242    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:00.126760    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:00.126835    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:00.137004    4280 logs.go:282] 0 containers: []
	W1003 20:48:00.137016    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:00.137078    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:00.147257    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:00.147272    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:00.147277    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:00.181656    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:00.181664    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:00.196119    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:00.196128    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:00.207838    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:00.207849    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:00.222405    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:00.222419    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:00.246237    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:00.246246    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:00.259337    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:00.259353    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:00.264087    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:00.264093    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:00.300464    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:00.300478    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:00.318480    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:00.318493    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:00.330638    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:00.330652    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:00.348300    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:00.348312    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:00.366423    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:00.366434    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:02.881999    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:07.884225    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:07.884359    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:07.896938    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:07.897025    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:07.911460    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:07.911537    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:07.922577    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:07.922658    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:07.933384    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:07.933461    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:07.944913    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:07.944993    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:07.955352    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:07.955433    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:07.965591    4280 logs.go:282] 0 containers: []
	W1003 20:48:07.965604    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:07.965667    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:07.975922    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:07.975938    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:07.975943    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:08.012193    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:08.012202    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:08.046764    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:08.046775    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:08.061180    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:08.061196    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:08.072969    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:08.072980    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:08.098027    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:08.098036    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:08.102636    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:08.102644    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:08.116996    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:08.117009    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:08.132953    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:08.132965    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:08.144717    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:08.144731    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:08.159059    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:08.159072    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:08.172067    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:08.172081    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:08.189578    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:08.189591    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:10.702925    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:15.705243    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:15.705421    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:15.718236    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:15.718321    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:15.728954    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:15.729029    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:15.739120    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:15.739198    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:15.749484    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:15.749559    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:15.759753    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:15.759830    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:15.770052    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:15.770129    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:15.780561    4280 logs.go:282] 0 containers: []
	W1003 20:48:15.780573    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:15.780642    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:15.791572    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:15.791587    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:15.791592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:15.803190    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:15.803200    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:15.816187    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:15.816197    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:15.820863    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:15.820870    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:15.859584    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:15.859595    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:15.874127    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:15.874137    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:15.885630    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:15.885644    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:15.909016    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:15.909030    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:15.924236    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:15.924250    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:15.959778    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:15.959786    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:15.979254    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:15.979263    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:15.991436    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:15.991446    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:16.009321    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:16.009335    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:18.536179    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:23.538464    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:23.538698    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:23.555357    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:23.555456    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:23.570154    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:23.570235    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:23.581415    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:23.581481    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:23.591689    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:23.591757    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:23.601918    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:23.601990    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:23.612661    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:23.612737    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:23.627076    4280 logs.go:282] 0 containers: []
	W1003 20:48:23.627090    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:23.627148    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:23.637733    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:23.637752    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:23.637757    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:23.676108    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:23.676118    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:23.693310    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:23.693318    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:23.708384    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:23.708394    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:23.719825    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:23.719834    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:23.737269    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:23.737284    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:23.748308    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:23.748318    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:23.773086    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:23.773094    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:23.806836    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:23.806846    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:23.820436    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:23.820445    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:23.832115    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:23.832129    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:23.843604    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:23.843618    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:23.856866    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:23.856877    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:26.363153    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:31.364858    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:31.365108    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:31.382360    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:31.382459    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:31.395339    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:31.395422    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:31.406855    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:31.406928    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:31.417622    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:31.417691    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:31.428734    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:31.428816    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:31.438967    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:31.439046    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:31.449050    4280 logs.go:282] 0 containers: []
	W1003 20:48:31.449059    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:31.449119    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:31.459818    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:31.459833    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:31.459838    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:31.495992    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:31.496000    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:31.500726    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:31.500735    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:31.512696    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:31.512707    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:31.525115    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:31.525127    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:31.536633    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:31.536643    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:31.561127    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:31.561137    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:31.572158    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:31.572167    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:31.607953    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:31.607965    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:31.622418    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:31.622427    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:31.637010    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:31.637018    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:31.648906    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:31.648915    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:31.663532    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:31.663543    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:34.182892    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:39.185182    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:39.185387    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:39.203519    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:39.203618    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:39.217722    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:39.217802    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:39.229180    4280 logs.go:282] 3 containers: [6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:48:39.229258    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:39.239279    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:39.239354    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:39.249688    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:39.249759    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:39.262152    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:39.262228    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:39.272261    4280 logs.go:282] 0 containers: []
	W1003 20:48:39.272274    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:39.272342    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:39.282973    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:39.282993    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:39.282999    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:39.288102    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:39.288108    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:39.302357    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:39.302367    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:39.316539    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:39.316549    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:39.328669    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:39.328682    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:39.355120    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:39.355139    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:39.392126    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:39.392140    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:39.406058    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:39.406070    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:39.418154    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:39.418164    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:39.432944    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:39.432958    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:39.444979    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:39.444988    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:39.470440    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:39.470454    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:39.483302    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:39.483314    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:39.517680    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:48:39.517691    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:48:42.031077    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:47.032945    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:47.033358    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:47.062769    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:47.062911    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:47.080923    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:47.081036    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:47.095108    4280 logs.go:282] 3 containers: [6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:48:47.095192    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:47.106819    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:47.106889    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:47.117302    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:47.117370    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:47.128027    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:47.128102    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:47.137921    4280 logs.go:282] 0 containers: []
	W1003 20:48:47.137931    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:47.138000    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:47.148204    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:47.148222    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:47.148227    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:47.162873    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:47.162887    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:47.174848    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:47.174860    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:47.201288    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:47.201304    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:47.236553    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:48:47.236568    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:48:47.247610    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:47.247623    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:47.259303    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:47.259317    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:47.293992    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:47.294001    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:47.308942    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:47.308952    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:47.325933    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:47.325950    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:47.341374    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:47.341387    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:47.352934    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:47.352944    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:47.370558    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:47.370568    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:47.375508    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:47.375515    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
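
	The cycle above repeats for the rest of the run: the probe of https://10.0.2.15:8443/healthz times out after roughly five seconds, and minikube then enumerates each control-plane container and tails its logs before probing again. A minimal, purely illustrative Go sketch of that poll-and-diagnose pattern follows; the helper names probeHealthz and gatherLogs are assumptions for this sketch and this is not minikube's actual implementation.

	// Illustrative sketch only: probe the apiserver /healthz endpoint with a
	// short timeout and, on failure, list the control-plane containers and
	// tail their logs before retrying, mirroring the log lines above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os/exec"
		"strings"
		"time"
	)

	// probeHealthz (hypothetical name) checks the apiserver health endpoint.
	func probeHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	// gatherLogs (hypothetical name) lists containers for one component and
	// tails each one's logs, like the "docker ps -a --filter" / "docker logs
	// --tail 400" pairs in the log above.
	func gatherLogs(component string) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return
		}
		for _, id := range strings.Fields(string(out)) {
			exec.Command("docker", "logs", "--tail", "400", id).Run()
		}
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for {
			if err := probeHealthz("https://10.0.2.15:8443/healthz"); err == nil {
				return // apiserver became healthy
			}
			for _, c := range components {
				gatherLogs(c)
			}
			time.Sleep(2 * time.Second) // roughly the pause before the next probe in the log
		}
	}

	In the failing run recorded below, the probe never succeeds, so the same enumerate-and-gather round recurs until the test times out.
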
	I1003 20:48:49.889900    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:54.890927    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:54.891121    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:54.905492    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:54.905573    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:54.916678    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:54.916748    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:54.928537    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:48:54.928613    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:54.938601    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:54.938672    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:54.949384    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:54.949460    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:54.959733    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:54.959805    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:54.970438    4280 logs.go:282] 0 containers: []
	W1003 20:48:54.970451    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:54.970512    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:54.985323    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:54.985340    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:54.985346    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:55.000191    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:55.000201    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:55.011795    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:48:55.011808    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:48:55.023566    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:55.023577    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:55.044292    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:55.044301    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:55.055799    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:55.055810    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:55.080897    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:55.080907    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:55.115232    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:55.115242    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:55.127516    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:55.127528    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:55.139458    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:55.139467    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:55.157608    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:55.157617    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:55.193393    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:55.193405    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:55.197807    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:55.197816    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:55.212971    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:48:55.212982    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:48:55.224975    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:55.224987    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:57.742587    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:02.745032    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:02.745467    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:02.776202    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:02.776346    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:02.794682    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:02.794790    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:02.808771    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:02.808864    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:02.820826    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:02.820903    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:02.831901    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:02.831987    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:02.842052    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:02.842126    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:02.852534    4280 logs.go:282] 0 containers: []
	W1003 20:49:02.852546    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:02.852614    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:02.864451    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:02.864470    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:02.864476    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:02.898487    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:02.898496    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:02.912947    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:02.912959    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:02.924742    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:02.924754    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:02.929431    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:02.929441    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:02.943797    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:02.943809    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:02.957577    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:02.957588    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:02.969474    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:02.969485    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:02.981194    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:02.981206    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:03.013813    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:03.013826    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:03.025721    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:03.025732    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:03.043293    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:03.043306    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:03.067119    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:03.067128    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:03.101772    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:03.101782    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:03.114074    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:03.114084    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:05.627686    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:10.628180    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:10.628413    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:10.643915    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:10.644009    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:10.656188    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:10.656266    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:10.667345    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:10.667423    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:10.678243    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:10.678314    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:10.692529    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:10.692600    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:10.702803    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:10.702873    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:10.713301    4280 logs.go:282] 0 containers: []
	W1003 20:49:10.713312    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:10.713372    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:10.723313    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:10.723330    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:10.723336    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:10.735488    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:10.735501    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:10.771645    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:10.771655    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:10.808068    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:10.808078    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:10.822281    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:10.822291    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:10.834283    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:10.834293    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:10.839305    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:10.839316    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:10.850530    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:10.850541    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:10.875970    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:10.875979    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:10.887238    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:10.887250    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:10.902227    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:10.902242    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:10.918197    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:10.918207    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:10.929622    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:10.929631    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:10.941998    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:10.942008    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:10.953618    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:10.953630    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:13.477426    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:18.479767    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:18.480046    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:18.505690    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:18.505805    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:18.522129    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:18.522217    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:18.537242    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:18.537319    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:18.549131    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:18.549211    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:18.559620    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:18.559686    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:18.579531    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:18.579606    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:18.589903    4280 logs.go:282] 0 containers: []
	W1003 20:49:18.589918    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:18.589983    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:18.599941    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:18.599978    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:18.599986    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:18.614546    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:18.614559    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:18.630304    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:18.630313    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:18.645896    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:18.645910    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:18.657719    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:18.657732    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:18.671847    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:18.671860    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:18.689541    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:18.689550    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:18.702178    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:18.702192    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:18.737081    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:18.737096    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:18.749537    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:18.749550    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:18.774264    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:18.774272    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:18.810100    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:18.810108    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:18.814784    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:18.814791    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:18.826611    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:18.826622    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:18.838177    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:18.838190    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:21.351569    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:26.352849    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:26.352977    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:26.364953    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:26.365037    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:26.375857    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:26.375931    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:26.386411    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:26.386481    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:26.397262    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:26.397339    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:26.407680    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:26.407750    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:26.418320    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:26.418391    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:26.428579    4280 logs.go:282] 0 containers: []
	W1003 20:49:26.428590    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:26.428657    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:26.438910    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:26.438931    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:26.438937    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:26.474871    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:26.474882    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:26.489326    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:26.489340    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:26.500950    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:26.500963    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:26.512913    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:26.512924    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:26.517894    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:26.517902    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:26.553102    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:26.553115    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:26.565602    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:26.565616    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:26.581905    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:26.581916    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:26.594441    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:26.594454    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:26.606393    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:26.606407    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:26.630824    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:26.630833    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:26.642669    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:26.642684    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:26.657215    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:26.657229    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:26.669231    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:26.669241    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:29.189025    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:34.191305    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:34.191415    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:34.202507    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:34.202595    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:34.214109    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:34.214189    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:34.224915    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:34.225000    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:34.241125    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:34.241201    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:34.256068    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:34.256144    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:34.266800    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:34.266876    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:34.277939    4280 logs.go:282] 0 containers: []
	W1003 20:49:34.277953    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:34.278014    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:34.288691    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:34.288707    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:34.288713    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:34.293151    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:34.293162    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:34.356910    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:34.356922    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:34.369820    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:34.369843    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:34.404335    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:34.404349    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:34.418554    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:34.418563    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:34.430149    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:34.430166    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:34.447810    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:34.447822    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:34.465456    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:34.465470    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:34.477173    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:34.477183    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:34.501674    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:34.501683    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:34.515751    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:34.515763    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:34.527536    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:34.527546    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:34.539896    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:34.539906    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:34.551833    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:34.551847    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:37.068577    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:42.070889    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:42.071037    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:42.083014    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:42.083087    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:42.093512    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:42.093581    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:42.103868    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:42.103943    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:42.114582    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:42.114644    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:42.129230    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:42.129309    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:42.144927    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:42.145004    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:42.155367    4280 logs.go:282] 0 containers: []
	W1003 20:49:42.155382    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:42.155437    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:42.165936    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:42.165956    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:42.165962    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:42.201342    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:42.201352    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:42.220374    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:42.220385    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:42.234712    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:42.234721    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:42.251467    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:42.251477    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:42.280985    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:42.280998    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:42.294052    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:42.294063    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:42.329185    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:42.329195    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:42.347826    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:42.347837    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:42.363362    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:42.363371    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:42.375079    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:42.375090    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:42.386678    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:42.386688    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:42.391767    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:42.391773    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:42.404101    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:42.404111    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:42.416034    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:42.416044    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:44.942951    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:49.945196    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:49.945339    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:49.957401    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:49.957484    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:49.968544    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:49.968616    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:49.985761    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:49.985838    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:49.996899    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:49.996976    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:50.007658    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:50.007730    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:50.018573    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:50.018648    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:50.029894    4280 logs.go:282] 0 containers: []
	W1003 20:49:50.029906    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:50.029977    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:50.041794    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:50.041813    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:50.041819    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:50.057113    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:50.057127    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:50.069128    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:50.069145    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:50.094295    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:50.094306    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:50.109503    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:50.109513    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:50.154016    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:50.154029    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:50.168403    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:50.168417    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:50.180873    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:50.180887    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:50.193205    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:50.193216    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:50.210397    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:50.210407    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:50.230671    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:50.230682    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:50.246012    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:50.246026    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:50.258328    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:50.258339    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:50.292674    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:50.292684    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:50.296978    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:50.296985    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:52.810706    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:57.812927    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:57.813064    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:57.827142    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:57.827240    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:57.839452    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:57.839526    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:57.850501    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:57.850581    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:57.865015    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:57.865096    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:57.876034    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:57.876109    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:57.887840    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:57.887908    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:57.898186    4280 logs.go:282] 0 containers: []
	W1003 20:49:57.898202    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:57.898267    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:57.909418    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:57.909434    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:57.909441    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:57.922084    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:57.922095    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:57.936187    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:57.936198    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:57.948451    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:57.948462    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:57.963231    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:57.963242    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:58.000842    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:58.000849    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:58.005253    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:58.005259    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:58.019907    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:58.019917    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:58.033135    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:58.033151    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:58.068926    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:58.068937    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:58.081078    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:58.081088    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:58.093441    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:58.093455    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:58.117848    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:58.117857    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:58.139826    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:58.139837    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:58.152633    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:58.152647    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:00.670664    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:05.672904    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:05.673107    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:05.687275    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:05.687365    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:05.698187    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:05.698268    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:05.709725    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:50:05.709811    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:05.719981    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:05.720060    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:05.730811    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:05.730887    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:05.742344    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:05.742421    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:05.752483    4280 logs.go:282] 0 containers: []
	W1003 20:50:05.752494    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:05.752560    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:05.762989    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:05.763011    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:05.763017    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:05.774975    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:05.774986    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:05.792245    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:05.792258    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:05.817018    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:05.817031    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:05.853951    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:05.853963    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:05.869603    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:05.869614    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:05.905316    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:05.905324    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:05.920261    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:05.920272    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:05.932187    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:05.932197    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:05.944576    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:50:05.944592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:50:05.957109    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:05.957124    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:05.968960    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:05.968973    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:05.982898    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:05.982912    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:06.000165    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:06.000179    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:06.011981    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:06.011995    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:08.518839    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:13.521110    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:13.521237    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:13.538441    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:13.538516    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:13.553031    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:13.553121    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:13.564074    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:50:13.564156    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:13.574893    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:13.574983    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:13.586778    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:13.586882    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:13.597469    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:13.597550    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:13.607614    4280 logs.go:282] 0 containers: []
	W1003 20:50:13.607627    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:13.607682    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:13.618308    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:13.618326    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:13.618332    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:13.652164    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:13.652174    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:13.663574    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:13.663586    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:13.674825    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:13.674835    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:13.700267    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:13.700276    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:13.705153    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:13.705162    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:13.720112    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:13.720127    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:13.734361    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:13.734372    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:13.749157    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:13.749168    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:13.760952    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:13.760963    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:13.778764    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:13.778774    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:13.815325    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:13.815336    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:13.827318    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:50:13.827329    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:50:13.839762    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:13.839774    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:13.852099    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:13.852110    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:16.368165    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:21.370467    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:21.370692    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:21.388838    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:21.388936    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:21.402637    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:21.402714    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:21.414532    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:50:21.414606    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:21.429884    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:21.429960    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:21.440655    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:21.440730    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:21.450968    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:21.451038    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:21.462919    4280 logs.go:282] 0 containers: []
	W1003 20:50:21.462929    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:21.462995    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:21.473129    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:21.473145    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:21.473151    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:21.487827    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:21.487843    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:21.502856    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:21.502869    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:21.537388    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:21.537397    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:21.572376    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:21.572387    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:21.584842    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:21.584855    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:21.589341    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:21.589349    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:21.601162    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:21.601175    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:21.613652    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:50:21.613665    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:50:21.625277    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:21.625289    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:21.649839    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:21.649848    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:21.661803    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:21.661816    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:21.676266    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:21.676278    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:21.687981    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:21.687995    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:21.705571    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:21.705585    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:24.219563    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:29.221878    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:29.222040    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:29.235265    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:29.235355    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:29.246033    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:29.246113    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:29.257653    4280 logs.go:282] 4 containers: [05fd43da78d5 dbdc722f9f79 6f01bb70655f 0a2b0bd296a5]
	I1003 20:50:29.257722    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:29.272090    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:29.272169    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:29.282832    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:29.282908    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:29.293532    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:29.293604    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:29.310467    4280 logs.go:282] 0 containers: []
	W1003 20:50:29.310480    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:29.310551    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:29.320640    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:29.320659    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:29.320665    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:29.335366    4280 logs.go:123] Gathering logs for coredns [05fd43da78d5] ...
	I1003 20:50:29.335380    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05fd43da78d5"
	I1003 20:50:29.351021    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:29.351036    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:29.362698    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:29.362712    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:29.398734    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:29.398750    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:29.413408    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:29.413417    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:29.424995    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:29.425005    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:29.449564    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:29.449579    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:29.466023    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:29.466032    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:29.477380    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:29.477392    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:29.481735    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:29.481741    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:29.516333    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:29.516349    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:29.528629    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:29.528640    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:29.553546    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:29.553555    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:29.565779    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:29.565793    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:32.078642    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:37.079982    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:37.084693    4280 out.go:201] 
	W1003 20:50:37.088519    4280 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1003 20:50:37.088528    4280 out.go:270] * 
	W1003 20:50:37.089107    4280 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:50:37.099484    4280 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-902000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
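The stderr log above shows the root cause of exit status 80: the apiserver healthz probe at https://10.0.2.15:8443/healthz never reported healthy within the 6m0s node wait. As a rough manual re-check only (a sketch, not part of the test output: the profile name, node IP, and port are taken from the log above, and curl inside the guest is assumed to be available), one could repeat the same probe and collect the log bundle the advice box asks for:

	# re-run the probe the test loops on, from inside the guest VM
	out/minikube-darwin-arm64 -p running-upgrade-902000 ssh -- curl -k https://10.0.2.15:8443/healthz
	# collect the full log bundle referenced in the advice box above
	out/minikube-darwin-arm64 -p running-upgrade-902000 logs --file=logs.txt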
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-03 20:50:37.188544 -0700 PDT m=+3789.908008043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-902000 -n running-upgrade-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-902000 -n running-upgrade-902000: exit status 2 (15.642924375s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-902000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-191000          | force-systemd-flag-191000 | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-492000              | force-systemd-env-492000  | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-492000           | force-systemd-env-492000  | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT | 03 Oct 24 20:40 PDT |
	| start   | -p docker-flags-166000                | docker-flags-166000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-191000             | force-systemd-flag-191000 | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-191000          | force-systemd-flag-191000 | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT | 03 Oct 24 20:40 PDT |
	| start   | -p cert-expiration-224000             | cert-expiration-224000    | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-166000 ssh               | docker-flags-166000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-166000 ssh               | docker-flags-166000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-166000                | docker-flags-166000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT | 03 Oct 24 20:40 PDT |
	| start   | -p cert-options-725000                | cert-options-725000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-725000 ssh               | cert-options-725000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-725000 -- sudo        | cert-options-725000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-725000                | cert-options-725000       | jenkins | v1.34.0 | 03 Oct 24 20:40 PDT | 03 Oct 24 20:40 PDT |
	| start   | -p running-upgrade-902000             | minikube                  | jenkins | v1.26.0 | 03 Oct 24 20:40 PDT | 03 Oct 24 20:42 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-902000             | running-upgrade-902000    | jenkins | v1.34.0 | 03 Oct 24 20:42 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-224000             | cert-expiration-224000    | jenkins | v1.34.0 | 03 Oct 24 20:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-224000             | cert-expiration-224000    | jenkins | v1.34.0 | 03 Oct 24 20:43 PDT | 03 Oct 24 20:43 PDT |
	| start   | -p kubernetes-upgrade-554000          | kubernetes-upgrade-554000 | jenkins | v1.34.0 | 03 Oct 24 20:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-554000          | kubernetes-upgrade-554000 | jenkins | v1.34.0 | 03 Oct 24 20:43 PDT | 03 Oct 24 20:44 PDT |
	| start   | -p kubernetes-upgrade-554000          | kubernetes-upgrade-554000 | jenkins | v1.34.0 | 03 Oct 24 20:44 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-554000          | kubernetes-upgrade-554000 | jenkins | v1.34.0 | 03 Oct 24 20:44 PDT | 03 Oct 24 20:44 PDT |
	| start   | -p stopped-upgrade-455000             | minikube                  | jenkins | v1.26.0 | 03 Oct 24 20:44 PDT | 03 Oct 24 20:44 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-455000 stop           | minikube                  | jenkins | v1.26.0 | 03 Oct 24 20:44 PDT | 03 Oct 24 20:45 PDT |
	| start   | -p stopped-upgrade-455000             | stopped-upgrade-455000    | jenkins | v1.34.0 | 03 Oct 24 20:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 20:45:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 20:45:09.560422    4416 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:45:09.560886    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:45:09.560890    4416 out.go:358] Setting ErrFile to fd 2...
	I1003 20:45:09.560892    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:45:09.561024    4416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:45:09.562350    4416 out.go:352] Setting JSON to false
	I1003 20:45:09.582857    4416 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4480,"bootTime":1728009029,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:45:09.582949    4416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:45:09.586169    4416 out.go:177] * [stopped-upgrade-455000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:45:09.593234    4416 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:45:09.593374    4416 notify.go:220] Checking for updates...
	I1003 20:45:09.600196    4416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:45:09.603236    4416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:45:09.606174    4416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:45:09.609268    4416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:45:09.612235    4416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:45:09.615507    4416 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:45:09.619172    4416 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1003 20:45:09.622141    4416 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:45:09.626169    4416 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:45:09.633142    4416 start.go:297] selected driver: qemu2
	I1003 20:45:09.633151    4416 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:45:09.633216    4416 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:45:09.635987    4416 cni.go:84] Creating CNI manager for ""
	I1003 20:45:09.636024    4416 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:45:09.636059    4416 start.go:340] cluster config:
	{Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:45:09.636118    4416 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:45:09.640172    4416 out.go:177] * Starting "stopped-upgrade-455000" primary control-plane node in "stopped-upgrade-455000" cluster
	I1003 20:45:09.648179    4416 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1003 20:45:09.648221    4416 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1003 20:45:09.648235    4416 cache.go:56] Caching tarball of preloaded images
	I1003 20:45:09.648371    4416 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:45:09.648386    4416 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1003 20:45:09.648453    4416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/config.json ...
	I1003 20:45:09.648789    4416 start.go:360] acquireMachinesLock for stopped-upgrade-455000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:45:09.648836    4416 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "stopped-upgrade-455000"
	I1003 20:45:09.648845    4416 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:45:09.648850    4416 fix.go:54] fixHost starting: 
	I1003 20:45:09.648973    4416 fix.go:112] recreateIfNeeded on stopped-upgrade-455000: state=Stopped err=<nil>
	W1003 20:45:09.648984    4416 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:45:09.653209    4416 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-455000" ...
	I1003 20:45:09.349858    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:09.350429    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:09.394625    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:09.394783    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:09.415756    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:09.415861    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:09.430549    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:09.430620    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:09.443337    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:09.443420    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:09.453968    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:09.454037    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:09.468778    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:09.468847    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:09.478953    4280 logs.go:282] 0 containers: []
	W1003 20:45:09.478964    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:09.479032    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:09.489874    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:09.489895    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:09.489904    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:09.509455    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:09.509464    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:09.532987    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:09.532998    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:09.545555    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:09.545565    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:09.559155    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:09.559167    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:09.574171    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:09.574187    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:09.585711    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:09.585721    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:09.600648    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:09.600656    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:09.623744    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:09.623752    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:09.638546    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:09.638555    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:09.652496    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:09.652504    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:09.663910    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:09.663920    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:09.690393    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:09.690412    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:09.729877    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:09.729897    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:09.734607    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:09.734614    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:09.770677    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:09.770690    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:09.785593    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:09.785612    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:12.304959    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:09.661233    4416 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:45:09.661359    4416 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50467-:22,hostfwd=tcp::50468-:2376,hostname=stopped-upgrade-455000 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/disk.qcow2
	I1003 20:45:09.710127    4416 main.go:141] libmachine: STDOUT: 
	I1003 20:45:09.710149    4416 main.go:141] libmachine: STDERR: 
	I1003 20:45:09.710155    4416 main.go:141] libmachine: Waiting for VM to start (ssh -p 50467 docker@127.0.0.1)...
	I1003 20:45:17.307589    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:17.307783    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:17.327573    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:17.327658    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:17.338755    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:17.338840    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:17.352705    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:17.352770    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:17.367021    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:17.367094    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:17.378338    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:17.378396    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:17.391119    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:17.391182    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:17.405726    4280 logs.go:282] 0 containers: []
	W1003 20:45:17.405742    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:17.405804    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:17.415795    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:17.415813    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:17.415820    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:17.429817    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:17.429826    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:17.444854    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:17.444864    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:17.456762    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:17.456773    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:17.470833    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:17.470845    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:17.485158    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:17.485167    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:17.511109    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:17.511115    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:17.525765    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:17.525774    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:17.542253    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:17.542265    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:17.557411    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:17.557422    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:17.597194    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:17.597204    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:17.602132    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:17.602141    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:17.637465    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:17.637478    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:17.651582    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:17.651592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:17.671636    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:17.671649    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:17.691055    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:17.691064    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:17.702581    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:17.702590    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:20.216743    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:25.219122    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:25.219677    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:25.260792    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:25.260955    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:25.282496    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:25.282622    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:25.300890    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:25.300973    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:25.313075    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:25.313160    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:25.323696    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:25.323781    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:25.338080    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:25.338166    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:25.365896    4280 logs.go:282] 0 containers: []
	W1003 20:45:25.365910    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:25.365989    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:25.385189    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:25.385209    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:25.385214    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:25.411622    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:25.411633    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:25.423047    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:25.423057    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:25.440651    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:25.440662    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:25.458754    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:25.458769    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:25.495003    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:25.495014    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:25.499333    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:25.499342    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:25.514600    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:25.514611    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:25.532876    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:25.532889    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:25.547372    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:25.547383    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:25.562791    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:25.562805    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:25.600363    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:25.600378    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:25.619682    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:25.619694    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:25.631574    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:25.631587    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:25.654009    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:25.654016    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:25.665920    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:25.665935    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:25.680701    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:25.680710    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:28.193459    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:29.884611    4416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/config.json ...
	I1003 20:45:29.885435    4416 machine.go:93] provisionDockerMachine start ...
	I1003 20:45:29.885601    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:29.886050    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:29.886066    4416 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:45:29.959639    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:45:29.959673    4416 buildroot.go:166] provisioning hostname "stopped-upgrade-455000"
	I1003 20:45:29.959805    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:29.960044    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:29.960056    4416 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-455000 && echo "stopped-upgrade-455000" | sudo tee /etc/hostname
	I1003 20:45:30.030260    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-455000
	
	I1003 20:45:30.030358    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.030556    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.030569    4416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-455000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-455000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-455000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:45:30.091204    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:45:30.091217    4416 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1040/.minikube}
	I1003 20:45:30.091233    4416 buildroot.go:174] setting up certificates
	I1003 20:45:30.091238    4416 provision.go:84] configureAuth start
	I1003 20:45:30.091245    4416 provision.go:143] copyHostCerts
	I1003 20:45:30.091324    4416 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem, removing ...
	I1003 20:45:30.091332    4416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem
	I1003 20:45:30.091446    4416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem (1078 bytes)
	I1003 20:45:30.091677    4416 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem, removing ...
	I1003 20:45:30.091681    4416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem
	I1003 20:45:30.091749    4416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem (1123 bytes)
	I1003 20:45:30.091892    4416 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem, removing ...
	I1003 20:45:30.091896    4416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem
	I1003 20:45:30.091964    4416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem (1675 bytes)
	I1003 20:45:30.092123    4416 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-455000 san=[127.0.0.1 localhost minikube stopped-upgrade-455000]
	I1003 20:45:30.193248    4416 provision.go:177] copyRemoteCerts
	I1003 20:45:30.193294    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:45:30.193301    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:45:30.221775    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1003 20:45:30.228945    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:45:30.235804    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1003 20:45:30.242280    4416 provision.go:87] duration metric: took 151.034708ms to configureAuth
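configureAuth above generates a Docker server certificate signed by the minikube CA, with SANs covering 127.0.0.1, localhost, minikube and stopped-upgrade-455000, and copies ca.pem, server.pem and server-key.pem into /etc/docker; these are the files the --tlsverify flags in the dockerd unit written further below point at. A quick way to inspect what was installed on the guest (a sketch, assuming the paths from this run):

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'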
	I1003 20:45:30.242288    4416 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:45:30.242387    4416 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:45:30.242428    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.242514    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.242519    4416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:45:30.295150    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:45:30.295158    4416 buildroot.go:70] root file system type: tmpfs
	I1003 20:45:30.295205    4416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:45:30.295253    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.295342    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.295375    4416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:45:30.352059    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:45:30.352133    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.352253    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.352261    4416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:45:30.731203    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:45:30.731216    4416 machine.go:96] duration metric: took 845.770291ms to provisionDockerMachine
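The unit update above follows a write-then-swap pattern: the generated unit is written to docker.service.new, diffed against the installed unit, and only moved into place (followed by daemon-reload, enable and restart) when they differ; on this fresh guest the diff fails with "can't stat", so the new unit is installed unconditionally. The same pattern by hand, assuming the paths from this run:

    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
    # generated [Unit]/[Service]/[Install] contents go here
    EOF
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }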
	I1003 20:45:30.731224    4416 start.go:293] postStartSetup for "stopped-upgrade-455000" (driver="qemu2")
	I1003 20:45:30.731230    4416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:45:30.731307    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:45:30.731316    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:45:30.761546    4416 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:45:30.762945    4416 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 20:45:30.762950    4416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/addons for local assets ...
	I1003 20:45:30.763023    4416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/files for local assets ...
	I1003 20:45:30.763169    4416 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem -> 15562.pem in /etc/ssl/certs
	I1003 20:45:30.763327    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:45:30.766023    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:45:30.773525    4416 start.go:296] duration metric: took 42.295208ms for postStartSetup
	I1003 20:45:30.773541    4416 fix.go:56] duration metric: took 21.124690584s for fixHost
	I1003 20:45:30.773591    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.773696    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.773708    4416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:45:30.825503    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013531.303175838
	
	I1003 20:45:30.825514    4416 fix.go:216] guest clock: 1728013531.303175838
	I1003 20:45:30.825517    4416 fix.go:229] Guest: 2024-10-03 20:45:31.303175838 -0700 PDT Remote: 2024-10-03 20:45:30.773545 -0700 PDT m=+21.235994626 (delta=529.630838ms)
	I1003 20:45:30.825528    4416 fix.go:200] guest clock delta is within tolerance: 529.630838ms
	I1003 20:45:30.825530    4416 start.go:83] releasing machines lock for "stopped-upgrade-455000", held for 21.176687833s
	I1003 20:45:30.825598    4416 ssh_runner.go:195] Run: cat /version.json
	I1003 20:45:30.825607    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:45:30.825634    4416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:45:30.825694    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	W1003 20:45:30.826122    4416 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50467: connect: connection refused
	I1003 20:45:30.826145    4416 retry.go:31] will retry after 374.262735ms: dial tcp [::1]:50467: connect: connection refused
	W1003 20:45:31.257593    4416 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1003 20:45:31.257794    4416 ssh_runner.go:195] Run: systemctl --version
	I1003 20:45:31.262797    4416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:45:31.267025    4416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:45:31.267102    4416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1003 20:45:31.273367    4416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1003 20:45:31.282401    4416 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1003 20:45:31.282415    4416 start.go:495] detecting cgroup driver to use...
	I1003 20:45:31.282584    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:45:31.292993    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1003 20:45:31.298023    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:45:31.302252    4416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:45:31.302296    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:45:31.306223    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:45:31.310193    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:45:31.313906    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:45:31.317440    4416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:45:31.320950    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:45:31.323966    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:45:31.327112    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:45:31.330352    4416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:45:31.333621    4416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:45:31.336603    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:31.418843    4416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:45:31.425768    4416 start.go:495] detecting cgroup driver to use...
	I1003 20:45:31.425838    4416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:45:31.433278    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:45:31.439771    4416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:45:31.445611    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:45:31.450333    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:45:31.454922    4416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:45:31.486249    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:45:31.491550    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:45:31.497032    4416 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:45:31.498438    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:45:31.501560    4416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 20:45:31.506951    4416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:45:31.568854    4416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:45:31.640875    4416 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:45:31.640944    4416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:45:31.646227    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:31.709355    4416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:45:31.823933    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:45:31.828402    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:45:31.832791    4416 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:45:31.896559    4416 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:45:31.960997    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:32.028876    4416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:45:32.034876    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:45:32.039564    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:32.122107    4416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:45:32.160556    4416 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:45:32.160646    4416 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:45:32.162825    4416 start.go:563] Will wait 60s for crictl version
	I1003 20:45:32.162880    4416 ssh_runner.go:195] Run: which crictl
	I1003 20:45:32.164079    4416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:45:32.178882    4416 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1003 20:45:32.178954    4416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:45:32.196928    4416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:45:33.196222    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:33.196336    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:33.212864    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:33.212945    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:33.232212    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:33.232301    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:33.244607    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:33.244695    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:33.256145    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:33.256232    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:33.269177    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:33.269376    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:33.285192    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:33.285276    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:33.297042    4280 logs.go:282] 0 containers: []
	W1003 20:45:33.297055    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:33.297133    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:33.309370    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:33.309389    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:33.309395    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:33.325105    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:33.325118    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:33.338311    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:33.338325    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:33.351310    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:33.351324    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:33.393997    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:33.394009    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:33.408825    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:33.408838    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:33.423034    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:33.423051    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:33.460564    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:33.460586    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:33.483161    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:33.483175    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:33.500714    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:33.500733    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:33.522773    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:33.522787    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:33.542259    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:33.542273    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:33.556542    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:33.556557    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:32.218522    4416 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1003 20:45:32.218666    4416 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1003 20:45:32.219930    4416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:45:32.223357    4416 kubeadm.go:883] updating cluster {Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1003 20:45:32.223408    4416 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1003 20:45:32.223455    4416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:45:32.233691    4416 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:45:32.233699    4416 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1003 20:45:32.233756    4416 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:45:32.237473    4416 ssh_runner.go:195] Run: which lz4
	I1003 20:45:32.238842    4416 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:45:32.240125    4416 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:45:32.240135    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1003 20:45:33.158320    4416 docker.go:649] duration metric: took 919.516208ms to copy over tarball
	I1003 20:45:33.158391    4416 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:45:34.356798    4416 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.198394208s)
	I1003 20:45:34.356813    4416 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1003 20:45:34.372116    4416 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:45:34.374970    4416 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1003 20:45:34.380001    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:34.461229    4416 ssh_runner.go:195] Run: sudo systemctl restart docker
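Because the guest reports only k8s.gcr.io-tagged images, registry.k8s.io/kube-apiserver:v1.24.1 is considered not preloaded: the preloaded-images tarball is copied over and unpacked directly into /var (which contains the Docker image store), and Docker is then restarted to pick it up. The core steps, copied from the run above:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo systemctl daemon-reload && sudo systemctl restart docker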
	I1003 20:45:33.582856    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:33.582881    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:33.587763    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:33.587776    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:33.600606    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:33.600617    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:33.613253    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:33.613266    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:36.133303    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:36.024123    4416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.562877125s)
	I1003 20:45:36.024219    4416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:45:36.035277    4416 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:45:36.035288    4416 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1003 20:45:36.035293    4416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 20:45:36.039839    4416 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:36.041189    4416 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:36.042998    4416 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:36.044706    4416 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:36.046645    4416 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:36.047135    4416 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:36.048381    4416 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:36.049257    4416 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:36.050453    4416 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:36.050528    4416 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:36.051738    4416 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:36.052095    4416 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1003 20:45:36.053041    4416 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:36.053136    4416 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:36.054320    4416 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1003 20:45:36.055056    4416 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.027624    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:38.066043    4416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1003 20:45:38.066100    4416 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:38.066233    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:38.087737    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1003 20:45:38.136884    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:38.153705    4416 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1003 20:45:38.153734    4416 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:38.153817    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:38.167906    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1003 20:45:38.169310    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:38.181043    4416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1003 20:45:38.181066    4416 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:38.181132    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:38.183885    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:38.191797    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1003 20:45:38.201459    4416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1003 20:45:38.201481    4416 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:38.201538    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:38.211198    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1003 20:45:38.484165    4416 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 20:45:38.484448    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:38.503987    4416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1003 20:45:38.504017    4416 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:38.504100    4416 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:38.522007    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 20:45:38.522166    4416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1003 20:45:38.523972    4416 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1003 20:45:38.523984    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1003 20:45:38.554514    4416 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1003 20:45:38.554528    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1003 20:45:38.651235    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:38.676346    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W1003 20:45:38.684758    4416 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1003 20:45:38.684909    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.798608    4416 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1003 20:45:38.798641    4416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1003 20:45:38.798656    4416 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1003 20:45:38.798671    4416 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1003 20:45:38.798671    4416 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:38.798702    4416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1003 20:45:38.798715    4416 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.798735    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:38.798735    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1003 20:45:38.798756    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.815383    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1003 20:45:38.815528    4416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1003 20:45:38.815808    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1003 20:45:38.815862    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1003 20:45:38.815882    4416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1003 20:45:38.817025    4416 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1003 20:45:38.817041    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1003 20:45:38.817541    4416 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1003 20:45:38.817555    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1003 20:45:38.830792    4416 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1003 20:45:38.830804    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1003 20:45:38.880680    4416 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1003 20:45:38.883296    4416 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1003 20:45:38.883305    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1003 20:45:38.920498    4416 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1003 20:45:38.920546    4416 cache_images.go:92] duration metric: took 2.885245333s to LoadCachedImages
	W1003 20:45:38.920589    4416 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
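Even after the preload, the registry.k8s.io-tagged images are still missing, so LoadCachedImages checks each one in the guest runtime, removes any stale tag on a miss, and re-transfers the image tarball from the host cache, loading it with docker load; here only storage-provisioner, pause and coredns exist in the host cache, and the step fails overall because the kube-apiserver cache file is absent (the warning above). The per-image load pattern, as used in this run:

    sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load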
	I1003 20:45:38.920594    4416 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1003 20:45:38.920649    4416 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-455000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:45:38.920723    4416 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:45:38.934185    4416 cni.go:84] Creating CNI manager for ""
	I1003 20:45:38.934196    4416 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:45:38.934202    4416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:45:38.934213    4416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-455000 NodeName:stopped-upgrade-455000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:45:38.934276    4416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-455000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:45:38.934346    4416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1003 20:45:38.937910    4416 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:45:38.937949    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 20:45:38.941039    4416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1003 20:45:38.946176    4416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:45:38.951484    4416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1003 20:45:38.956795    4416 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1003 20:45:38.958069    4416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:45:38.962087    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:39.045064    4416 ssh_runner.go:195] Run: sudo systemctl start kubelet
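The kubelet unit and 10-kubeadm.conf drop-in written just above pin ExecStart to the v1.24.1 kubelet binary and the cri-dockerd socket, after which kubelet is started. Two checks one could run on the guest to confirm what systemd ended up with (standard systemctl invocations, not part of this log):

    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager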
	I1003 20:45:39.052384    4416 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000 for IP: 10.0.2.15
	I1003 20:45:39.052395    4416 certs.go:194] generating shared ca certs ...
	I1003 20:45:39.052403    4416 certs.go:226] acquiring lock for ca certs: {Name:mke7121fb3a343b392a0b01a3f973157c3dad296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.052588    4416 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key
	I1003 20:45:39.052653    4416 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key
	I1003 20:45:39.052658    4416 certs.go:256] generating profile certs ...
	I1003 20:45:39.052764    4416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.key
	I1003 20:45:39.052783    4416 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc
	I1003 20:45:39.052796    4416 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1003 20:45:39.201855    4416 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc ...
	I1003 20:45:39.201868    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc: {Name:mk510a964a5e41d0d17a2fd442229e0d87401b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.202421    4416 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc ...
	I1003 20:45:39.202428    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc: {Name:mkb4398dc0c7ea2a578faad784730f0ad0f2647c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.202609    4416 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc -> /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt
	I1003 20:45:39.202756    4416 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc -> /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key
	I1003 20:45:39.202943    4416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/proxy-client.key
	I1003 20:45:39.203100    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem (1338 bytes)
	W1003 20:45:39.203134    4416 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556_empty.pem, impossibly tiny 0 bytes
	I1003 20:45:39.203140    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:45:39.203162    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem (1078 bytes)
	I1003 20:45:39.203184    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:45:39.203200    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem (1675 bytes)
	I1003 20:45:39.203241    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:45:39.203561    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:45:39.210413    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 20:45:39.217749    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:45:39.224961    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:45:39.231743    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:45:39.238413    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 20:45:39.245667    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:45:39.252979    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 20:45:39.259705    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem --> /usr/share/ca-certificates/1556.pem (1338 bytes)
	I1003 20:45:39.266471    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /usr/share/ca-certificates/15562.pem (1708 bytes)
	I1003 20:45:39.273736    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:45:39.280737    4416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:45:39.285937    4416 ssh_runner.go:195] Run: openssl version
	I1003 20:45:39.287944    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1556.pem && ln -fs /usr/share/ca-certificates/1556.pem /etc/ssl/certs/1556.pem"
	I1003 20:45:39.290823    4416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1556.pem
	I1003 20:45:39.292179    4416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:05 /usr/share/ca-certificates/1556.pem
	I1003 20:45:39.292205    4416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1556.pem
	I1003 20:45:39.293974    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1556.pem /etc/ssl/certs/51391683.0"
	I1003 20:45:39.297469    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15562.pem && ln -fs /usr/share/ca-certificates/15562.pem /etc/ssl/certs/15562.pem"
	I1003 20:45:39.300788    4416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15562.pem
	I1003 20:45:39.302330    4416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:05 /usr/share/ca-certificates/15562.pem
	I1003 20:45:39.302351    4416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15562.pem
	I1003 20:45:39.304284    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15562.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:45:39.307326    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:45:39.310601    4416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:45:39.312052    4416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:45:39.312075    4416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:45:39.313704    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 20:45:39.316934    4416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:45:39.318211    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:45:39.320104    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:45:39.321938    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:45:39.323944    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:45:39.325714    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:45:39.327526    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 20:45:39.329223    4416 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:45:39.329297    4416 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:45:39.339944    4416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:45:39.342951    4416 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:45:39.342957    4416 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:45:39.342990    4416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:45:39.346886    4416 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:45:39.347185    4416 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-455000" does not appear in /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:45:39.347280    4416 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1040/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-455000" cluster setting kubeconfig missing "stopped-upgrade-455000" context setting]
	I1003 20:45:39.347505    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.347957    4416 kapi.go:59] client config for stopped-upgrade-455000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c765d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:45:39.348314    4416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:45:39.351099    4416 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-455000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1003 20:45:39.351104    4416 kubeadm.go:1160] stopping kube-system containers ...
	I1003 20:45:39.351149    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:45:39.362141    4416 docker.go:483] Stopping containers: [38d603088dfa 61ff45fab245 ce9918a775c3 71c3a5cbd990 ca8f96da5995 f022ceefb216 86798697ade1 77f0409843de]
	I1003 20:45:39.362206    4416 ssh_runner.go:195] Run: docker stop 38d603088dfa 61ff45fab245 ce9918a775c3 71c3a5cbd990 ca8f96da5995 f022ceefb216 86798697ade1 77f0409843de
	I1003 20:45:39.372905    4416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 20:45:39.378324    4416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:45:39.381818    4416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:45:39.381827    4416 kubeadm.go:157] found existing configuration files:
	
	I1003 20:45:39.381870    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf
	I1003 20:45:39.385217    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:45:39.385260    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:45:39.388129    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf
	I1003 20:45:39.390813    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:45:39.390853    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:45:39.393826    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf
	I1003 20:45:39.396757    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:45:39.396788    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:45:39.399260    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf
	I1003 20:45:39.401909    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:45:39.401942    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:45:39.404849    4416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:45:39.407480    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:39.428879    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:41.134134    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:41.134297    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:41.145772    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:41.145854    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:41.160779    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:41.160860    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:41.171767    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:41.171823    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:41.182878    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:41.182954    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:41.197087    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:41.197159    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:41.209013    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:41.209094    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:41.220536    4280 logs.go:282] 0 containers: []
	W1003 20:45:41.220549    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:41.220615    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:41.231647    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:41.231665    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:41.231670    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:41.271686    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:41.271701    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:41.310932    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:41.310945    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:41.325555    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:41.325567    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:41.344983    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:41.344994    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:41.362441    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:41.362453    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:41.374545    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:41.374557    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:41.396821    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:41.396840    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:41.412336    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:41.412352    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:41.434235    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:41.434246    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:41.452124    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:41.452136    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:41.470140    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:41.470151    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:41.494199    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:41.494208    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:41.498901    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:41.498910    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:41.514263    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:41.514276    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:41.526566    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:41.526579    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:41.541185    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:41.541200    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:39.994146    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:40.125970    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:40.147466    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:40.171032    4416 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:45:40.171121    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:45:40.673243    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:45:41.171650    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:45:41.176171    4416 api_server.go:72] duration metric: took 1.00513775s to wait for apiserver process to appear ...
	I1003 20:45:41.176184    4416 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:45:41.176199    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:44.056208    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:46.178309    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:46.178364    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:49.058454    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:49.058687    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:49.081340    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:49.081456    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:49.096924    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:49.097018    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:49.110996    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:49.111080    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:49.122042    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:49.122121    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:49.132524    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:49.132603    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:49.143148    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:49.143229    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:49.153434    4280 logs.go:282] 0 containers: []
	W1003 20:45:49.153451    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:49.153520    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:49.164326    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:49.164345    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:49.164350    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:49.178617    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:49.178628    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:49.183186    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:49.183192    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:49.197461    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:49.197471    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:49.208776    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:49.208786    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:49.222256    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:49.222266    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:49.234154    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:49.234170    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:49.249025    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:49.249035    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:49.265608    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:49.265618    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:49.277142    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:49.277153    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:49.289703    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:49.289715    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:49.302956    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:49.302968    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:49.325598    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:49.325605    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:49.362114    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:49.362123    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:49.399966    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:49.399981    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:49.421946    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:49.421960    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:49.436630    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:49.436643    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:51.949475    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:51.178832    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:51.178855    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:56.951782    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:56.951970    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:45:56.967802    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:45:56.967893    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:45:56.979881    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:45:56.979960    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:45:56.990839    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:45:56.990917    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:45:57.002144    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:45:57.002226    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:45:57.016011    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:45:57.016088    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:45:57.031373    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:45:57.031447    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:45:57.041432    4280 logs.go:282] 0 containers: []
	W1003 20:45:57.041447    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:45:57.041504    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:45:57.055714    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:45:57.055731    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:45:57.055737    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:45:57.070133    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:45:57.070143    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:45:57.083789    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:45:57.083800    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:45:57.110697    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:45:57.110707    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:45:57.123238    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:45:57.123248    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:45:57.138036    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:45:57.138046    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:45:57.152665    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:45:57.152676    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:45:57.164948    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:45:57.164958    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:45:57.202265    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:45:57.202281    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:45:57.239746    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:45:57.239756    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:45:57.258918    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:45:57.258929    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:45:57.274251    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:45:57.274263    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:45:57.285826    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:45:57.285837    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:45:57.309803    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:45:57.309810    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:45:57.314587    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:45:57.314592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:45:57.325858    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:45:57.325868    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:45:57.343595    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:45:57.343608    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:45:56.179244    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:56.179321    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:59.859926    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:01.180083    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:01.180108    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:04.862220    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:04.862481    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:04.882893    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:46:04.883006    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:04.897205    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:46:04.897299    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:04.909543    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:46:04.909629    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:04.920380    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:46:04.920458    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:04.930909    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:46:04.930988    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:04.941883    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:46:04.941963    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:04.952051    4280 logs.go:282] 0 containers: []
	W1003 20:46:04.952061    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:04.952129    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:04.962921    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:46:04.962941    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:46:04.962947    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:46:04.979816    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:46:04.979831    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:46:04.990924    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:04.990933    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:05.024992    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:46:05.025008    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:46:05.039645    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:46:05.039655    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:46:05.063205    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:46:05.063215    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:46:05.080826    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:46:05.080836    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:46:05.092155    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:05.092166    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:05.114522    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:05.114530    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:05.151096    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:05.151104    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:05.155502    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:46:05.155512    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:46:05.173136    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:46:05.173145    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:46:05.190503    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:46:05.190516    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:46:05.212912    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:46:05.212922    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:05.224905    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:46:05.224921    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:46:05.239117    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:46:05.239128    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:46:05.251873    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:46:05.251883    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:46:07.765146    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:06.180910    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:06.181003    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:12.767860    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:12.768059    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:12.787427    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:46:12.787540    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:12.801960    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:46:12.802050    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:12.814061    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:46:12.814139    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:12.825172    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:46:12.825253    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:12.836298    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:46:12.836378    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:12.847336    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:46:12.847419    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:12.857017    4280 logs.go:282] 0 containers: []
	W1003 20:46:12.857030    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:12.857095    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:12.867601    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:46:12.867624    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:46:12.867629    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:46:12.881664    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:46:12.881674    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:46:12.892700    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:46:12.892714    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:12.905560    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:12.905575    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:12.910622    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:46:12.910631    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:46:12.925007    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:46:12.925016    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:46:12.938621    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:46:12.938631    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:46:12.953722    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:46:12.953744    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:46:12.965032    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:12.965043    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:12.987207    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:12.987214    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:13.022456    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:46:13.022471    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:46:13.041840    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:46:13.041854    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:46:13.055076    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:46:13.055085    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:46:13.072959    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:46:13.072970    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:46:13.086003    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:13.086015    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:13.124483    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:46:13.124499    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:46:13.139028    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:46:13.139039    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:46:11.182402    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:11.182430    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:15.653804    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:16.183781    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:16.183833    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:20.656172    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:20.656461    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:20.678962    4280 logs.go:282] 2 containers: [6f2196a8d53f c21a6a4f15b9]
	I1003 20:46:20.679100    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:20.693990    4280 logs.go:282] 2 containers: [2883442079a9 fbfb303c2ba7]
	I1003 20:46:20.694078    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:20.706028    4280 logs.go:282] 1 containers: [4e57018f73a8]
	I1003 20:46:20.706108    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:20.722746    4280 logs.go:282] 2 containers: [0bf89618f010 d495a53ce56f]
	I1003 20:46:20.722826    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:20.734109    4280 logs.go:282] 1 containers: [a821b2447501]
	I1003 20:46:20.734190    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:20.744902    4280 logs.go:282] 2 containers: [11afdc52bd14 19ed3440f6a0]
	I1003 20:46:20.744978    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:20.754938    4280 logs.go:282] 0 containers: []
	W1003 20:46:20.754949    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:20.755013    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:20.765592    4280 logs.go:282] 2 containers: [b18393276679 1e8dabb5d75d]
	I1003 20:46:20.765610    4280 logs.go:123] Gathering logs for kube-controller-manager [11afdc52bd14] ...
	I1003 20:46:20.765615    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11afdc52bd14"
	I1003 20:46:20.784279    4280 logs.go:123] Gathering logs for storage-provisioner [1e8dabb5d75d] ...
	I1003 20:46:20.784289    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e8dabb5d75d"
	I1003 20:46:20.796200    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:20.796210    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:20.832162    4280 logs.go:123] Gathering logs for kube-apiserver [c21a6a4f15b9] ...
	I1003 20:46:20.832171    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c21a6a4f15b9"
	I1003 20:46:20.851583    4280 logs.go:123] Gathering logs for etcd [fbfb303c2ba7] ...
	I1003 20:46:20.851594    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbfb303c2ba7"
	I1003 20:46:20.866905    4280 logs.go:123] Gathering logs for kube-scheduler [d495a53ce56f] ...
	I1003 20:46:20.866916    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d495a53ce56f"
	I1003 20:46:20.881698    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:46:20.881707    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:20.911414    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:20.911425    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:20.915640    4280 logs.go:123] Gathering logs for kube-apiserver [6f2196a8d53f] ...
	I1003 20:46:20.915649    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f2196a8d53f"
	I1003 20:46:20.932755    4280 logs.go:123] Gathering logs for coredns [4e57018f73a8] ...
	I1003 20:46:20.932765    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e57018f73a8"
	I1003 20:46:20.944001    4280 logs.go:123] Gathering logs for kube-controller-manager [19ed3440f6a0] ...
	I1003 20:46:20.944015    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19ed3440f6a0"
	I1003 20:46:20.956714    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:20.956725    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:20.992156    4280 logs.go:123] Gathering logs for etcd [2883442079a9] ...
	I1003 20:46:20.992171    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2883442079a9"
	I1003 20:46:21.006605    4280 logs.go:123] Gathering logs for storage-provisioner [b18393276679] ...
	I1003 20:46:21.006615    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b18393276679"
	I1003 20:46:21.018893    4280 logs.go:123] Gathering logs for kube-scheduler [0bf89618f010] ...
	I1003 20:46:21.018904    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bf89618f010"
	I1003 20:46:21.034980    4280 logs.go:123] Gathering logs for kube-proxy [a821b2447501] ...
	I1003 20:46:21.034990    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a821b2447501"
	I1003 20:46:21.046981    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:21.046990    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:23.570465    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:21.185302    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:21.185333    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:28.572823    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:28.572930    4280 kubeadm.go:597] duration metric: took 4m3.930379667s to restartPrimaryControlPlane
	W1003 20:46:28.572999    4280 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 20:46:28.573028    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1003 20:46:26.185979    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:26.186022    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:29.554229    4280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:46:29.559382    4280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:46:29.562362    4280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:46:29.565318    4280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:46:29.565323    4280 kubeadm.go:157] found existing configuration files:
	
	I1003 20:46:29.565356    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/admin.conf
	I1003 20:46:29.567692    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:46:29.567720    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:46:29.570277    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/kubelet.conf
	I1003 20:46:29.573008    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:46:29.573037    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:46:29.575467    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/controller-manager.conf
	I1003 20:46:29.578356    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:46:29.578391    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:46:29.581643    4280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/scheduler.conf
	I1003 20:46:29.584193    4280 kubeadm.go:163] "https://control-plane.minikube.internal:50280" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50280 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:46:29.584223    4280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:46:29.586689    4280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:46:29.603999    4280 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1003 20:46:29.604128    4280 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:46:29.650960    4280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:46:29.651050    4280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:46:29.651103    4280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 20:46:29.699694    4280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:46:29.703905    4280 out.go:235]   - Generating certificates and keys ...
	I1003 20:46:29.703942    4280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:46:29.703979    4280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:46:29.704031    4280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 20:46:29.704108    4280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1003 20:46:29.704205    4280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 20:46:29.704256    4280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1003 20:46:29.704307    4280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1003 20:46:29.704369    4280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1003 20:46:29.704468    4280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 20:46:29.707648    4280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 20:46:29.707667    4280 kubeadm.go:310] [certs] Using the existing "sa" key
	I1003 20:46:29.707710    4280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:46:29.781296    4280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:46:29.965117    4280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:46:30.101627    4280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:46:30.194647    4280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:46:30.226473    4280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:46:30.226920    4280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:46:30.226955    4280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:46:30.320793    4280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:46:30.324703    4280 out.go:235]   - Booting up control plane ...
	I1003 20:46:30.324877    4280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:46:30.324960    4280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:46:30.325059    4280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:46:30.325119    4280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:46:30.325214    4280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 20:46:31.188316    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:31.188337    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:34.824780    4280 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502876 seconds
	I1003 20:46:34.824897    4280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:46:34.847467    4280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:46:35.360837    4280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:46:35.361014    4280 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-902000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:46:35.867072    4280 kubeadm.go:310] [bootstrap-token] Using token: 8gn5wk.xe0im0a4rkjxu2gw
	I1003 20:46:35.873791    4280 out.go:235]   - Configuring RBAC rules ...
	I1003 20:46:35.873878    4280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:46:35.873955    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:46:35.880614    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:46:35.881917    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:46:35.883093    4280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:46:35.884569    4280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:46:35.888797    4280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:46:36.035394    4280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:46:36.272538    4280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:46:36.272918    4280 kubeadm.go:310] 
	I1003 20:46:36.272953    4280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:46:36.272956    4280 kubeadm.go:310] 
	I1003 20:46:36.273011    4280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:46:36.273015    4280 kubeadm.go:310] 
	I1003 20:46:36.273028    4280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:46:36.273065    4280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:46:36.273151    4280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:46:36.273156    4280 kubeadm.go:310] 
	I1003 20:46:36.273181    4280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:46:36.273183    4280 kubeadm.go:310] 
	I1003 20:46:36.273287    4280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:46:36.273295    4280 kubeadm.go:310] 
	I1003 20:46:36.273319    4280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:46:36.273367    4280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:46:36.273457    4280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:46:36.273465    4280 kubeadm.go:310] 
	I1003 20:46:36.273503    4280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:46:36.273559    4280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:46:36.273564    4280 kubeadm.go:310] 
	I1003 20:46:36.273669    4280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8gn5wk.xe0im0a4rkjxu2gw \
	I1003 20:46:36.273723    4280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 \
	I1003 20:46:36.273734    4280 kubeadm.go:310] 	--control-plane 
	I1003 20:46:36.273737    4280 kubeadm.go:310] 
	I1003 20:46:36.273791    4280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:46:36.273796    4280 kubeadm.go:310] 
	I1003 20:46:36.273835    4280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8gn5wk.xe0im0a4rkjxu2gw \
	I1003 20:46:36.273904    4280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 
	I1003 20:46:36.273974    4280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 20:46:36.273986    4280 cni.go:84] Creating CNI manager for ""
	I1003 20:46:36.273995    4280 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:46:36.277788    4280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 20:46:36.284749    4280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 20:46:36.287762    4280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1003 20:46:36.293177    4280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:46:36.293234    4280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:46:36.293236    4280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-902000 minikube.k8s.io/updated_at=2024_10_03T20_46_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=running-upgrade-902000 minikube.k8s.io/primary=true
	I1003 20:46:36.340995    4280 kubeadm.go:1113] duration metric: took 47.811375ms to wait for elevateKubeSystemPrivileges
	I1003 20:46:36.341012    4280 ops.go:34] apiserver oom_adj: -16
	I1003 20:46:36.341018    4280 kubeadm.go:394] duration metric: took 4m11.712015667s to StartCluster
	I1003 20:46:36.341033    4280 settings.go:142] acquiring lock: {Name:mkcb41cafeed9afeb88d9d6f184696173f92f60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:46:36.341133    4280 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:46:36.341551    4280 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:46:36.341739    4280 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:46:36.341747    4280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:46:36.341785    4280 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-902000"
	I1003 20:46:36.341792    4280 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-902000"
	W1003 20:46:36.341799    4280 addons.go:243] addon storage-provisioner should already be in state true
	I1003 20:46:36.341810    4280 host.go:66] Checking if "running-upgrade-902000" exists ...
	I1003 20:46:36.341811    4280 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-902000"
	I1003 20:46:36.341822    4280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-902000"
	I1003 20:46:36.341883    4280 config.go:182] Loaded profile config "running-upgrade-902000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:46:36.342102    4280 retry.go:31] will retry after 1.005842733s: connect: dial unix /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/monitor: connect: connection refused
	I1003 20:46:36.342751    4280 kapi.go:59] client config for running-upgrade-902000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/running-upgrade-902000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021c25d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:46:36.342869    4280 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-902000"
	W1003 20:46:36.342874    4280 addons.go:243] addon default-storageclass should already be in state true
	I1003 20:46:36.342880    4280 host.go:66] Checking if "running-upgrade-902000" exists ...
	I1003 20:46:36.343411    4280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:46:36.343415    4280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:46:36.343420    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	I1003 20:46:36.345725    4280 out.go:177] * Verifying Kubernetes components...
	I1003 20:46:36.353614    4280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:46:36.450294    4280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:46:36.455685    4280 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:46:36.455732    4280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:46:36.460573    4280 api_server.go:72] duration metric: took 118.821209ms to wait for apiserver process to appear ...
	I1003 20:46:36.460582    4280 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:46:36.460590    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:36.488074    4280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:46:36.790076    4280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:46:36.790089    4280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:46:37.354258    4280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:46:37.357199    4280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:46:37.357207    4280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:46:37.357216    4280 sshutil.go:53] new ssh client: &{IP:localhost Port:50248 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/running-upgrade-902000/id_rsa Username:docker}
	I1003 20:46:37.394536    4280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:46:36.190506    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:36.190531    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:41.462654    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:41.462677    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:41.192803    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:41.193064    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:41.212035    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:46:41.212138    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:41.226068    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:46:41.226166    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:41.237874    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:46:41.237963    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:41.248369    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:46:41.248442    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:41.258608    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:46:41.258677    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:41.269029    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:46:41.269108    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:41.280111    4416 logs.go:282] 0 containers: []
	W1003 20:46:41.280121    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:41.280187    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:41.290614    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:46:41.290634    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:41.290640    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:41.370920    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:46:41.370935    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:46:41.386154    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:46:41.386164    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:46:41.399898    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:46:41.399907    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:46:41.412323    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:46:41.412333    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:46:41.457762    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:46:41.457772    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:46:41.471109    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:46:41.471122    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:46:41.486283    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:41.486297    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:41.513498    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:41.513506    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:41.551917    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:41.551924    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:41.556050    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:46:41.556056    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:46:41.572491    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:46:41.572505    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:46:41.585033    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:46:41.585043    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:41.600990    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:46:41.601001    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:46:41.612551    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:46:41.612562    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:46:41.624380    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:46:41.624395    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:46:44.144247    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:46.462884    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:46.462944    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:49.146481    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:49.146660    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:49.160057    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:46:49.160142    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:49.172175    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:46:49.172261    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:49.183592    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:46:49.183671    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:49.194179    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:46:49.194262    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:49.205707    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:46:49.205788    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:49.216451    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:46:49.216525    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:49.228484    4416 logs.go:282] 0 containers: []
	W1003 20:46:49.228496    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:49.228567    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:49.238920    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:46:49.238938    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:49.238944    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:49.277210    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:49.277226    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:49.302665    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:46:49.302682    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:46:49.317469    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:46:49.317496    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:46:49.339707    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:46:49.339724    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:46:49.353286    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:46:49.353304    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:46:49.365576    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:46:49.365588    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:46:49.384228    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:49.384237    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:49.423814    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:46:49.423827    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:46:49.469462    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:46:49.469490    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:46:49.485922    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:46:49.485935    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:46:49.498372    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:46:49.498383    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:46:49.510695    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:46:49.510707    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:49.523599    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:49.523611    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:49.527953    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:46:49.527963    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:46:49.543531    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:46:49.543544    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:46:51.463728    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:51.463757    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:52.057455    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:56.464249    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:56.464307    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:57.059755    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:57.059891    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:57.076636    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:46:57.076722    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:57.087366    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:46:57.087439    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:57.098107    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:46:57.098188    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:57.108391    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:46:57.108466    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:57.118629    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:46:57.118707    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:57.129548    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:46:57.129616    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:57.139559    4416 logs.go:282] 0 containers: []
	W1003 20:46:57.139573    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:57.139638    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:57.150241    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:46:57.150257    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:57.150263    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:57.186821    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:46:57.186830    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:46:57.224934    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:46:57.224948    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:46:57.238802    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:46:57.238812    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:46:57.255708    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:46:57.255719    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:46:57.266999    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:46:57.267009    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:46:57.281052    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:46:57.281062    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:46:57.299464    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:46:57.299475    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:46:57.312752    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:57.312763    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:57.348989    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:46:57.348999    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:46:57.363279    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:46:57.363288    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:46:57.375541    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:57.375550    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:57.400760    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:46:57.400768    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:57.412138    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:57.412150    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:57.416369    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:46:57.416378    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:46:57.431117    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:46:57.431127    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:01.465061    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:01.465117    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:59.945375    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:06.466213    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:06.466254    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1003 20:47:06.792355    4280 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1003 20:47:06.796563    4280 out.go:177] * Enabled addons: storage-provisioner
	I1003 20:47:06.804482    4280 addons.go:510] duration metric: took 30.462723542s for enable addons: enabled=[storage-provisioner]
	I1003 20:47:04.948172    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:04.948367    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:04.963439    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:04.963537    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:04.979449    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:04.979529    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:04.991310    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:04.991382    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:05.001902    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:05.001980    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:05.012267    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:05.012334    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:05.024309    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:05.024385    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:05.034609    4416 logs.go:282] 0 containers: []
	W1003 20:47:05.034621    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:05.034698    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:05.049651    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:05.049671    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:05.049677    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:05.086595    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:05.086603    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:05.098496    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:05.098506    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:05.110477    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:05.110489    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:05.134917    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:05.134925    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:05.149191    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:05.149200    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:05.163421    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:05.163431    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:05.175332    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:05.175341    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:05.179756    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:05.179766    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:05.219969    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:05.219982    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:05.231388    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:05.231403    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:05.246131    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:05.246141    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:05.262658    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:05.262668    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:05.274883    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:05.274896    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:05.309642    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:05.309657    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:05.324429    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:05.324440    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:07.847841    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:11.467379    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:11.467428    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:12.850474    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:12.850713    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:12.873729    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:12.873846    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:12.893192    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:12.893284    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:12.905793    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:12.905864    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:12.916812    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:12.916891    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:12.927114    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:12.927193    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:12.939780    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:12.939860    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:12.951037    4416 logs.go:282] 0 containers: []
	W1003 20:47:12.951050    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:12.951118    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:12.961637    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:12.961656    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:12.961661    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:12.987363    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:12.987373    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:12.998658    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:12.998671    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:13.014651    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:13.014662    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:13.036565    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:13.036574    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:13.048179    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:13.048189    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:13.086681    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:13.086689    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:13.105926    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:13.105934    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:13.119851    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:13.119862    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:13.137516    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:13.137526    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:13.149415    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:13.149425    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:13.163860    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:13.163870    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:13.177695    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:13.177704    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:13.214479    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:13.214493    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:13.228621    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:13.228631    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:13.232748    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:13.232756    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:16.469095    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:16.469158    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:15.770275    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:21.471084    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:21.471107    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:20.772547    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:20.772813    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:20.799838    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:20.799970    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:20.819231    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:20.819324    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:20.832239    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:20.832322    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:20.843249    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:20.843322    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:20.853164    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:20.853238    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:20.864108    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:20.864183    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:20.874466    4416 logs.go:282] 0 containers: []
	W1003 20:47:20.874479    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:20.874543    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:20.892672    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:20.892691    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:20.892696    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:20.906969    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:20.906979    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:20.918582    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:20.918593    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:20.938051    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:20.938062    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:20.955500    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:20.955510    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:20.980759    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:20.980774    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:21.020531    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:21.020543    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:21.037435    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:21.037445    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:21.049228    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:21.049239    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:21.062934    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:21.062945    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:21.075216    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:21.075225    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:21.087425    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:21.087439    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:21.101061    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:21.101074    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:21.105658    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:21.105667    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:21.144927    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:21.144941    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:21.182903    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:21.182917    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:23.700314    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:26.473312    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:26.473357    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:28.702615    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:28.702899    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:28.730728    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:28.730874    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:28.749326    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:28.749401    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:28.762876    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:28.762959    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:28.774584    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:28.774648    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:28.785125    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:28.785198    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:28.795597    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:28.795674    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:28.805996    4416 logs.go:282] 0 containers: []
	W1003 20:47:28.806007    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:28.806069    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:28.816288    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:28.816304    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:28.816310    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:28.880723    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:28.880732    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:28.905395    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:28.905407    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:28.944205    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:28.944218    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:28.958039    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:28.958054    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:28.969800    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:28.969810    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:28.982741    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:28.982752    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:28.987566    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:28.987575    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:29.007738    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:29.007752    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:29.032517    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:29.032534    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:29.044180    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:29.044193    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:29.059174    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:29.059188    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:29.071278    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:29.071289    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:29.089026    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:29.089041    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:29.100245    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:29.100255    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:29.136854    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:29.136861    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:31.475662    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:31.475708    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:31.650713    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:36.477675    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:36.477793    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:36.493234    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:47:36.493304    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:36.503816    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:47:36.503887    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:36.515715    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:47:36.515797    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:36.527442    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:47:36.527518    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:36.538074    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:47:36.538151    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:36.548624    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:47:36.548697    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:36.563207    4280 logs.go:282] 0 containers: []
	W1003 20:47:36.563219    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:36.563314    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:36.573684    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:47:36.573700    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:47:36.573705    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:47:36.585241    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:36.585251    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:36.610011    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:47:36.610021    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:36.622499    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:36.622510    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:36.627688    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:47:36.627695    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:47:36.643068    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:47:36.643082    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:47:36.665300    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:47:36.665310    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:47:36.681636    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:47:36.681651    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:47:36.694372    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:47:36.694383    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:47:36.707283    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:47:36.707294    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:47:36.722697    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:36.722706    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:36.759261    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:36.759272    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:36.848828    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:47:36.848840    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:47:36.651401    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:36.651500    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:36.662773    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:36.662846    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:36.674978    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:36.675065    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:36.686450    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:36.686567    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:36.697462    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:36.697536    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:36.708803    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:36.708882    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:36.720568    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:36.720640    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:36.731769    4416 logs.go:282] 0 containers: []
	W1003 20:47:36.731779    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:36.731847    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:36.742785    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:36.742803    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:36.742809    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:36.758410    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:36.758420    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:36.783229    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:36.783245    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:36.826862    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:36.826880    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:36.842300    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:36.842314    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:36.855647    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:36.855661    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:36.867978    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:36.867989    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:36.903133    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:36.903146    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:36.917751    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:36.917765    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:36.936248    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:36.936257    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:36.953268    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:36.953280    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:36.957585    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:36.957595    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:36.997581    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:36.997592    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:37.012575    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:37.012585    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:37.026748    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:37.026758    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:37.046792    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:37.046803    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:39.559924    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:39.368380    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:44.370730    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:44.370966    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:44.393560    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:47:44.393671    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:44.409024    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:47:44.409114    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:44.421864    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:47:44.421943    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:44.432937    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:47:44.433014    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:44.443052    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:47:44.443130    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:44.453511    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:47:44.453575    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:44.463973    4280 logs.go:282] 0 containers: []
	W1003 20:47:44.463986    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:44.464049    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:44.474281    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:47:44.474297    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:44.474302    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:44.479445    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:47:44.479452    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:47:44.494820    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:47:44.494830    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:47:44.506330    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:47:44.506342    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:47:44.521407    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:47:44.521417    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:47:44.533015    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:47:44.533026    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:47:44.550303    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:47:44.550313    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:47:44.561507    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:44.561515    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:44.599828    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:44.599847    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:44.640262    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:47:44.640270    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:47:44.659765    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:47:44.659779    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:47:44.672589    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:44.672604    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:44.697526    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:47:44.697537    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:47.214611    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:44.560908    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:44.561011    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:44.572645    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:44.572721    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:44.584157    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:44.584236    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:44.595112    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:44.595190    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:44.607015    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:44.607098    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:44.617967    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:44.618045    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:44.629029    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:44.629118    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:44.639913    4416 logs.go:282] 0 containers: []
	W1003 20:47:44.639923    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:44.639989    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:44.654068    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:44.654086    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:44.654093    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:44.692565    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:44.692580    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:44.704560    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:44.704572    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:44.716989    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:44.717000    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:44.729672    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:44.729682    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:44.744299    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:44.744310    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:44.760048    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:44.760058    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:44.771928    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:44.771939    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:44.796697    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:44.796705    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:44.809093    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:44.809104    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:44.847975    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:44.848012    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:44.852028    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:44.852036    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:44.902659    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:44.902669    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:44.918787    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:44.918801    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:44.933948    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:44.933958    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:44.952049    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:44.952057    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:47.465922    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:52.216938    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:52.217202    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:52.231664    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:47:52.231741    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:52.243188    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:47:52.243266    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:52.256885    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:47:52.256960    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:52.268266    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:47:52.268341    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:52.279481    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:47:52.279552    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:52.293944    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:47:52.294016    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:52.304437    4280 logs.go:282] 0 containers: []
	W1003 20:47:52.304446    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:52.304506    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:52.314845    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:47:52.314860    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:47:52.314864    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:47:52.326428    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:52.326442    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:52.331401    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:52.331407    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:52.371948    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:47:52.371958    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:47:52.388042    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:47:52.388056    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:47:52.401813    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:47:52.401826    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:47:52.419794    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:52.419803    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:52.443273    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:47:52.443281    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:52.454469    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:52.454477    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:52.489064    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:47:52.489077    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:47:52.502212    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:47:52.502227    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:47:52.515033    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:47:52.515041    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:47:52.531285    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:47:52.531300    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:47:52.468306    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:52.468398    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:52.479239    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:52.479313    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:52.490953    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:52.491038    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:52.502909    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:52.502983    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:52.514550    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:52.514632    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:52.526842    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:52.526918    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:52.538623    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:52.538704    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:52.549693    4416 logs.go:282] 0 containers: []
	W1003 20:47:52.549702    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:52.549772    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:52.559959    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:52.559975    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:52.559980    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:52.574543    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:52.574554    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:52.611365    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:52.611381    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:52.625858    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:52.625873    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:52.637491    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:52.637500    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:52.660067    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:52.660075    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:52.664030    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:52.664036    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:52.678079    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:52.678094    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:52.691717    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:52.691731    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:52.703168    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:52.703177    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:52.714740    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:52.714750    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:52.732221    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:52.732236    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:52.743682    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:52.743692    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:52.756732    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:52.756747    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:52.792726    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:52.792740    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:52.808689    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:52.808703    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:55.046151    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:55.349093    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:00.048410    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:00.048627    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:00.064567    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:00.064668    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:00.076831    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:00.076911    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:00.088086    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:00.088167    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:00.102975    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:00.103047    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:00.113162    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:00.113242    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:00.126760    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:00.126835    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:00.137004    4280 logs.go:282] 0 containers: []
	W1003 20:48:00.137016    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:00.137078    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:00.147257    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:00.147272    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:00.147277    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:00.181656    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:00.181664    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:00.196119    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:00.196128    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:00.207838    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:00.207849    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:00.222405    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:00.222419    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:00.246237    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:00.246246    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:00.259337    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:00.259353    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:00.264087    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:00.264093    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:00.300464    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:00.300478    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:00.318480    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:00.318493    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:00.330638    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:00.330652    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:00.348300    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:00.348312    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:00.366423    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:00.366434    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:02.881999    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:00.351322    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:00.351413    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:00.363232    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:00.363321    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:00.377308    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:00.377390    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:00.388168    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:00.388264    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:00.400549    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:00.400641    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:00.411189    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:00.411262    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:00.421588    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:00.421667    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:00.432174    4416 logs.go:282] 0 containers: []
	W1003 20:48:00.432186    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:00.432253    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:00.445818    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:00.445835    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:00.445840    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:00.464095    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:00.464107    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:00.478055    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:00.478067    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:00.492116    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:00.492126    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:00.506941    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:00.506956    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:00.518554    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:00.518565    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:00.543881    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:00.543892    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:00.581173    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:00.581189    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:00.630443    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:00.630455    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:00.642518    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:00.642529    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:00.657800    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:00.657815    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:00.675144    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:00.675153    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:00.689550    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:00.689560    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:00.725924    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:00.725935    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:00.738543    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:00.738552    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:00.743272    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:00.743279    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:03.256002    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:07.884225    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:07.884359    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:07.896938    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:07.897025    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:07.911460    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:07.911537    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:07.922577    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:07.922658    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:07.933384    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:07.933461    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:07.944913    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:07.944993    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:07.955352    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:07.955433    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:07.965591    4280 logs.go:282] 0 containers: []
	W1003 20:48:07.965604    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:07.965667    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:07.975922    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:07.975938    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:07.975943    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:08.012193    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:08.012202    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:08.046764    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:08.046775    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:08.061180    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:08.061196    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:08.072969    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:08.072980    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:08.098027    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:08.098036    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:08.102636    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:08.102644    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:08.116996    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:08.117009    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:08.132953    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:08.132965    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:08.144717    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:08.144731    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:08.159059    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:08.159072    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:08.172067    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:08.172081    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:08.189578    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:08.189591    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:08.258261    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:08.258378    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:08.269603    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:08.269686    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:08.280583    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:08.280662    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:08.290978    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:08.291058    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:08.302128    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:08.302207    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:08.312504    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:08.312572    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:08.323360    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:08.323432    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:08.333365    4416 logs.go:282] 0 containers: []
	W1003 20:48:08.333375    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:08.333433    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:08.343673    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:08.343691    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:08.343696    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:08.382336    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:08.382344    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:08.393689    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:08.393704    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:08.408023    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:08.408036    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:08.419329    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:08.419340    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:08.433588    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:08.433602    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:08.445647    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:08.445660    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:08.460140    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:08.460153    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:08.484806    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:08.484813    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:08.496532    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:08.496545    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:08.515798    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:08.515813    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:08.529455    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:08.529468    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:08.541900    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:08.541913    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:08.545971    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:08.545977    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:08.581591    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:08.581604    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:08.619292    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:08.619306    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:10.702925    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:11.135870    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:15.705243    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:15.705421    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:15.718236    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:15.718321    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:15.728954    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:15.729029    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:15.739120    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:15.739198    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:15.749484    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:15.749559    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:15.759753    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:15.759830    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:15.770052    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:15.770129    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:15.780561    4280 logs.go:282] 0 containers: []
	W1003 20:48:15.780573    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:15.780642    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:15.791572    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:15.791587    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:15.791592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:15.803190    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:15.803200    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:15.816187    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:15.816197    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:15.820863    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:15.820870    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:15.859584    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:15.859595    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:15.874127    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:15.874137    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:15.885630    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:15.885644    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:15.909016    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:15.909030    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:15.924236    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:15.924250    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:15.959778    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:15.959786    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:15.979254    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:15.979263    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:15.991436    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:15.991446    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:16.009321    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:16.009335    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:18.536179    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:16.138218    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:16.138380    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:16.150293    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:16.150377    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:16.169977    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:16.170060    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:16.181952    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:16.182031    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:16.192673    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:16.192757    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:16.202981    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:16.203057    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:16.218808    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:16.218881    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:16.229035    4416 logs.go:282] 0 containers: []
	W1003 20:48:16.229046    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:16.229117    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:16.239536    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:16.239553    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:16.239559    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:16.279232    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:16.279253    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:16.283774    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:16.283782    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:16.319800    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:16.319808    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:16.331611    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:16.331624    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:16.342787    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:16.342800    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:16.358083    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:16.358095    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:16.395506    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:16.395525    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:16.409584    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:16.409597    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:16.432608    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:16.432621    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:16.448775    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:16.448789    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:16.471954    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:16.471962    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:16.483893    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:16.483908    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:16.495437    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:16.495452    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:16.514620    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:16.514633    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:16.525989    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:16.526002    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:19.045179    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:23.538464    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:23.538698    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:23.555357    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:23.555456    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:23.570154    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:23.570235    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:24.047487    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:24.047647    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:24.058571    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:24.058653    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:24.069700    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:24.069770    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:24.079857    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:24.079923    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:24.090971    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:24.091058    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:24.101653    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:24.101723    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:24.111866    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:24.111945    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:24.125324    4416 logs.go:282] 0 containers: []
	W1003 20:48:24.125335    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:24.125400    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:24.135866    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:24.135887    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:24.135893    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:24.148056    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:24.148066    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:24.170440    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:24.170447    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:24.181727    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:24.181742    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:24.185939    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:24.185945    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:24.199905    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:24.199916    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:24.211012    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:24.211023    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:24.235772    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:24.235787    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:24.247455    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:24.247465    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:24.259148    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:24.259162    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:24.297262    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:24.297270    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:24.311066    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:24.311075    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:24.330644    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:24.330655    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:24.354664    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:24.354674    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:24.366833    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:24.366845    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:24.402517    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:24.402532    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:23.581415    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:23.581481    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:23.591689    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:23.591757    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:23.601918    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:23.601990    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:23.612661    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:23.612737    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:23.627076    4280 logs.go:282] 0 containers: []
	W1003 20:48:23.627090    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:23.627148    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:23.637733    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:23.637752    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:23.637757    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:23.676108    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:23.676118    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:23.693310    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:23.693318    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:23.708384    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:23.708394    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:23.719825    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:23.719834    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:23.737269    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:23.737284    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:23.748308    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:23.748318    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:23.773086    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:23.773094    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:23.806836    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:23.806846    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:23.820436    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:23.820445    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:23.832115    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:23.832129    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:23.843604    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:23.843618    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:23.856866    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:23.856877    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:26.363153    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:26.942745    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:31.364858    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:31.365108    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:31.382360    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:31.382459    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:31.395339    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:31.395422    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:31.406855    4280 logs.go:282] 2 containers: [0a2b0bd296a5 e68525deae30]
	I1003 20:48:31.406928    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:31.417622    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:31.417691    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:31.428734    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:31.428816    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:31.438967    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:31.439046    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:31.449050    4280 logs.go:282] 0 containers: []
	W1003 20:48:31.449059    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:31.449119    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:31.459818    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:31.459833    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:31.459838    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:31.495992    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:31.496000    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:31.500726    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:31.500735    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:31.512696    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:31.512707    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:31.525115    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:31.525127    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:31.536633    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:31.536643    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:31.561127    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:31.561137    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:31.572158    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:31.572167    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:31.607953    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:31.607965    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:31.622418    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:31.622427    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:31.637010    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:31.637018    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:31.648906    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:31.648915    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:31.663532    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:31.663543    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:31.944559    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:31.944672    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:31.960501    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:31.960591    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:31.971839    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:31.971918    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:31.982879    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:31.982948    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:31.997472    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:31.997541    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:32.008574    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:32.008657    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:32.019847    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:32.019919    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:32.030851    4416 logs.go:282] 0 containers: []
	W1003 20:48:32.030861    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:32.030923    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:32.041324    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:32.041342    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:32.041348    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:32.052613    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:32.052623    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:32.076007    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:32.076021    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:32.080017    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:32.080022    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:32.117399    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:32.117413    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:32.131932    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:32.131944    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:32.146270    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:32.146284    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:32.181137    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:32.181145    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:32.193632    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:32.193643    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:32.205065    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:32.205079    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:32.219577    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:32.219590    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:32.237167    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:32.237181    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:32.248748    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:32.248762    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:32.285113    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:32.285121    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:32.298835    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:32.298844    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:32.312074    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:32.312089    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:34.182892    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:34.828741    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:39.185182    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:39.185387    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:39.203519    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:39.203618    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:39.217722    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:39.217802    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:39.229180    4280 logs.go:282] 3 containers: [6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:48:39.229258    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:39.239279    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:39.239354    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:39.249688    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:39.249759    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:39.262152    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:39.262228    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:39.272261    4280 logs.go:282] 0 containers: []
	W1003 20:48:39.272274    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:39.272342    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:39.282973    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:39.282993    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:39.282999    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:39.288102    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:39.288108    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:39.302357    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:39.302367    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:39.316539    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:39.316549    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:39.328669    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:39.328682    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:39.355120    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:39.355139    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:39.392126    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:39.392140    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:39.406058    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:39.406070    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:39.418154    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:39.418164    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:39.432944    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:39.432958    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:39.444979    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:39.444988    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:39.470440    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:39.470454    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:39.483302    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:39.483314    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:39.517680    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:48:39.517691    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:48:42.031077    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:39.831122    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:39.831392    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:39.852400    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:39.852520    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:39.867292    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:39.867377    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:39.885107    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:39.885185    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:39.895428    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:39.895511    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:39.905715    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:39.905788    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:39.917732    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:39.917810    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:39.928182    4416 logs.go:282] 0 containers: []
	W1003 20:48:39.928193    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:39.928255    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:39.944997    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:39.945013    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:39.945019    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:39.949227    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:39.949237    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:39.963776    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:39.963787    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:39.978254    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:39.978265    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:39.995814    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:39.995824    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:40.007447    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:40.007457    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:40.030561    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:40.030570    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:40.068517    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:40.068527    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:40.103296    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:40.103307    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:40.141405    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:40.141423    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:40.152635    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:40.152646    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:40.167784    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:40.167795    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:40.179139    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:40.179149    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:40.195322    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:40.195332    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:40.210955    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:40.210971    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:40.228054    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:40.228064    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:42.742473    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:47.032945    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:47.033358    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:47.062769    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:47.062911    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:47.080923    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:47.081036    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:47.095108    4280 logs.go:282] 3 containers: [6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:48:47.095192    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:47.106819    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:47.106889    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:47.117302    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:47.117370    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:47.128027    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:47.128102    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:47.137921    4280 logs.go:282] 0 containers: []
	W1003 20:48:47.137931    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:47.138000    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:47.148204    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:47.148222    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:47.148227    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:47.162873    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:47.162887    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:47.174848    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:47.174860    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:47.201288    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:47.201304    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:47.236553    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:48:47.236568    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:48:47.247610    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:47.247623    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:47.259303    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:47.259317    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:47.293992    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:47.294001    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:47.308942    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:47.308952    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:47.325933    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:47.325950    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:47.341374    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:47.341387    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:47.352934    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:47.352944    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:47.370558    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:47.370568    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:47.375508    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:47.375515    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:47.744847    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:47.745019    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:47.763134    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:47.763232    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:47.777551    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:47.777634    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:47.788953    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:47.789035    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:47.799739    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:47.799817    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:47.811615    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:47.811694    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:47.823771    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:47.823845    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:47.834082    4416 logs.go:282] 0 containers: []
	W1003 20:48:47.834092    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:47.834162    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:47.844628    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:47.844645    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:47.844650    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:47.858772    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:47.858788    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:47.872641    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:47.872657    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:47.884779    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:47.884788    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:47.906587    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:47.906594    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:47.943900    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:47.943909    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:47.984651    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:47.984666    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:47.997038    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:47.997049    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:48.011527    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:48.011540    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:48.029274    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:48.029286    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:48.041252    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:48.041267    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:48.045296    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:48.045302    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:48.059663    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:48.059678    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:48.072854    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:48.072865    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:48.107669    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:48.107684    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:48.120063    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:48.120074    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:49.889900    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:50.634257    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:54.890927    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:54.891121    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:54.905492    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:48:54.905573    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:54.916678    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:48:54.916748    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:54.928537    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:48:54.928613    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:54.938601    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:48:54.938672    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:54.949384    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:48:54.949460    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:54.959733    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:48:54.959805    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:54.970438    4280 logs.go:282] 0 containers: []
	W1003 20:48:54.970451    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:54.970512    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:54.985323    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:48:54.985340    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:48:54.985346    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:48:55.000191    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:48:55.000201    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:48:55.011795    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:48:55.011808    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:48:55.023566    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:48:55.023577    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:48:55.044292    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:48:55.044301    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:48:55.055799    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:55.055810    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:55.080897    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:55.080907    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:55.115232    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:48:55.115242    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:55.127516    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:48:55.127528    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:48:55.139458    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:48:55.139467    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:48:55.157608    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:55.157617    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:55.193393    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:55.193405    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:55.197807    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:48:55.197816    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:48:55.212971    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:48:55.212982    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:48:55.224975    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:48:55.224987    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:48:57.742587    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:55.636537    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:55.636723    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:55.647704    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:55.647792    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:55.665515    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:55.665617    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:55.676113    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:55.676201    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:55.689254    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:55.689341    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:55.699570    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:55.699636    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:55.709920    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:55.709998    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:55.720065    4416 logs.go:282] 0 containers: []
	W1003 20:48:55.720075    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:55.720140    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:55.730952    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:55.730970    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:55.730976    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:55.767493    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:55.767503    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:55.782969    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:55.782978    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:55.796952    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:55.796962    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:55.808218    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:55.808233    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:55.812529    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:55.812536    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:55.850195    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:55.850205    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:55.867846    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:55.867860    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:55.879621    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:55.879633    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:55.921924    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:55.921936    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:55.936971    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:55.936985    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:55.948475    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:55.948485    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:55.970895    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:55.970902    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:55.990496    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:55.990506    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:56.003061    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:56.003071    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:56.015215    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:56.015227    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:58.529242    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:02.745032    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:02.745467    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:02.776202    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:02.776346    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:02.794682    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:02.794790    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:02.808771    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:02.808864    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:02.820826    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:02.820903    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:02.831901    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:02.831987    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:02.842052    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:02.842126    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:02.852534    4280 logs.go:282] 0 containers: []
	W1003 20:49:02.852546    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:02.852614    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:02.864451    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:02.864470    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:02.864476    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:02.898487    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:02.898496    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:02.912947    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:02.912959    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:02.924742    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:02.924754    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:02.929431    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:02.929441    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:02.943797    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:02.943809    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:02.957577    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:02.957588    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:02.969474    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:02.969485    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:02.981194    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:02.981206    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:03.013813    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:03.013826    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:03.025721    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:03.025732    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:03.043293    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:03.043306    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:03.067119    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:03.067128    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:03.101772    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:03.101782    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:03.114074    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:03.114084    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:03.531451    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:03.531579    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:03.548636    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:03.548720    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:03.563530    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:03.563616    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:03.579276    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:03.579353    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:03.590253    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:03.590339    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:03.605105    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:03.605187    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:03.616685    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:03.616761    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:03.627078    4416 logs.go:282] 0 containers: []
	W1003 20:49:03.627090    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:03.627153    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:03.637870    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:03.637888    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:03.637893    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:03.651679    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:03.651689    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:03.673819    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:03.673829    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:03.686047    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:03.686058    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:03.704014    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:03.704025    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:03.708486    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:03.708495    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:03.723021    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:03.723032    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:03.737683    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:03.737692    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:03.755021    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:03.755030    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:03.766097    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:03.766107    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:03.777589    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:03.777599    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:03.817199    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:03.817210    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:03.854713    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:03.854724    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:03.893081    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:03.893096    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:03.904356    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:03.904369    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:03.918276    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:03.918286    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:05.627686    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:06.442850    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:10.628180    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:10.628413    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:10.643915    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:10.644009    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:10.656188    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:10.656266    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:10.667345    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:10.667423    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:10.678243    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:10.678314    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:10.692529    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:10.692600    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:10.702803    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:10.702873    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:10.713301    4280 logs.go:282] 0 containers: []
	W1003 20:49:10.713312    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:10.713372    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:10.723313    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:10.723330    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:10.723336    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:10.735488    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:10.735501    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:10.771645    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:10.771655    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:10.808068    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:10.808078    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:10.822281    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:10.822291    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:10.834283    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:10.834293    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:10.839305    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:10.839316    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:10.850530    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:10.850541    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:10.875970    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:10.875979    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:10.887238    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:10.887250    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:10.902227    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:10.902242    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:10.918197    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:10.918207    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:10.929622    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:10.929631    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:10.941998    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:10.942008    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:10.953618    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:10.953630    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:13.477426    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:11.444577    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:11.444816    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:11.467272    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:11.467384    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:11.480923    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:11.481010    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:11.492838    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:11.492913    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:11.503855    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:11.503935    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:11.514422    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:11.514500    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:11.525537    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:11.525620    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:11.542635    4416 logs.go:282] 0 containers: []
	W1003 20:49:11.542647    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:11.542715    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:11.553879    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:11.553899    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:11.553906    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:11.557976    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:11.557982    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:11.573652    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:11.573663    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:11.588180    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:11.588191    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:11.603188    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:11.603196    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:11.617313    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:11.617323    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:11.629787    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:11.629796    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:11.648025    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:11.648034    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:11.662696    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:11.662706    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:11.674685    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:11.674694    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:11.686711    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:11.686720    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:11.725846    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:11.725863    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:11.762017    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:11.762027    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:11.800561    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:11.800573    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:11.812058    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:11.812069    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:11.829796    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:11.829807    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:14.355168    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:18.479767    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:18.480046    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:18.505690    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:18.505805    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:18.522129    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:18.522217    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:18.537242    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:18.537319    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:18.549131    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:18.549211    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:18.559620    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:18.559686    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:19.356692    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:19.356921    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:19.374657    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:19.374762    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:19.388243    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:19.388322    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:19.402916    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:19.402995    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:19.418805    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:19.418878    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:19.429617    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:19.429700    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:19.441519    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:19.441599    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:19.456183    4416 logs.go:282] 0 containers: []
	W1003 20:49:19.456198    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:19.456263    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:19.466614    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:19.466630    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:19.466636    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:19.506879    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:19.506889    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:19.518450    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:19.518465    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:19.530478    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:19.530488    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:19.552726    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:19.552736    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:18.579531    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:18.579606    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:18.589903    4280 logs.go:282] 0 containers: []
	W1003 20:49:18.589918    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:18.589983    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:18.599941    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:18.599978    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:18.599986    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:18.614546    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:18.614559    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:18.630304    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:18.630313    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:18.645896    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:18.645910    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:18.657719    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:18.657732    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:18.671847    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:18.671860    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:18.689541    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:18.689550    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:18.702178    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:18.702192    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:18.737081    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:18.737096    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:18.749537    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:18.749550    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:18.774264    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:18.774272    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:18.810100    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:18.810108    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:18.814784    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:18.814791    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:18.826611    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:18.826622    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:18.838177    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:18.838190    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:21.351569    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:19.577075    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:19.577086    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:19.591042    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:19.591052    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:19.604229    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:19.604239    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:19.608414    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:19.608421    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:19.645765    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:19.645775    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:19.657382    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:19.657394    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:19.669510    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:19.669523    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:19.681370    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:19.681382    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:19.716577    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:19.716588    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:19.730989    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:19.731000    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:19.745453    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:19.745463    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:22.262029    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:26.352849    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:26.352977    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:26.364953    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:26.365037    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:26.375857    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:26.375931    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:26.386411    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:26.386481    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:26.397262    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:26.397339    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:26.407680    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:26.407750    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:26.418320    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:26.418391    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:26.428579    4280 logs.go:282] 0 containers: []
	W1003 20:49:26.428590    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:26.428657    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:26.438910    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:26.438931    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:26.438937    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:26.474871    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:26.474882    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:26.489326    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:26.489340    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:26.500950    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:26.500963    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:26.512913    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:26.512924    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:26.517894    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:26.517902    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:26.553102    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:26.553115    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:26.565602    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:26.565616    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:26.581905    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:26.581916    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:26.594441    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:26.594454    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:26.606393    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:26.606407    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:26.630824    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:26.630833    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:26.642669    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:26.642684    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:26.657215    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:26.657229    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:26.669231    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:26.669241    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:27.264381    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:27.264496    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:27.275788    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:27.275879    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:27.286988    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:27.287062    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:27.297234    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:27.297309    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:27.307456    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:27.307538    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:27.321601    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:27.321691    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:27.332518    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:27.332595    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:27.342204    4416 logs.go:282] 0 containers: []
	W1003 20:49:27.342215    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:27.342280    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:27.352668    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:27.352685    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:27.352692    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:27.387989    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:27.388000    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:27.399934    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:27.399948    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:27.423134    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:27.423145    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:27.437292    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:27.437301    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:27.449103    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:27.449115    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:27.465103    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:27.465113    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:27.477743    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:27.477754    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:27.496675    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:27.496686    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:27.507831    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:27.507842    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:27.526245    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:27.526255    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:27.539092    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:27.539105    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:27.576664    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:27.576672    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:27.580644    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:27.580652    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:27.619550    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:27.619561    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:27.638393    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:27.638409    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:29.189025    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:30.172426    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:34.191305    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:34.191415    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:34.202507    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:34.202595    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:34.214109    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:34.214189    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:34.224915    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:34.225000    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:34.241125    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:34.241201    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:34.256068    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:34.256144    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:34.266800    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:34.266876    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:34.277939    4280 logs.go:282] 0 containers: []
	W1003 20:49:34.277953    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:34.278014    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:34.288691    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:34.288707    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:34.288713    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:34.293151    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:34.293162    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:34.356910    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:34.356922    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:34.369820    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:34.369843    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:34.404335    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:34.404349    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:34.418554    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:34.418563    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:34.430149    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:34.430166    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:34.447810    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:34.447822    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:34.465456    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:34.465470    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:34.477173    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:34.477183    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:34.501674    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:34.501683    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:34.515751    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:34.515763    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:34.527536    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:34.527546    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:34.539896    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:34.539906    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:34.551833    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:34.551847    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:37.068577    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:35.174735    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:35.175196    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:35.203901    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:35.204044    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:35.221967    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:35.222064    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:35.235136    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:35.235215    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:35.246930    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:35.246997    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:35.261143    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:35.261206    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:35.271945    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:35.272019    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:35.291436    4416 logs.go:282] 0 containers: []
	W1003 20:49:35.291447    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:35.291507    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:35.302206    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:35.302223    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:35.302227    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:35.314034    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:35.314048    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:35.335851    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:35.335858    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:35.339889    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:35.339896    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:35.355898    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:35.355907    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:35.368064    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:35.368078    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:35.386048    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:35.386062    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:35.397196    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:35.397206    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:35.408894    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:35.408909    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:35.443417    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:35.443427    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:35.458037    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:35.458052    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:35.473023    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:35.473037    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:35.485133    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:35.485147    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:35.525883    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:35.525900    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:35.563286    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:35.563300    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:35.589026    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:35.589038    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:38.111429    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:42.070889    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:42.071037    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:42.083014    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:42.083087    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:42.093512    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:42.093581    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:42.103868    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:42.103943    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:42.114582    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:42.114644    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:42.129230    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:42.129309    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:42.144927    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:42.145004    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:42.155367    4280 logs.go:282] 0 containers: []
	W1003 20:49:42.155382    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:42.155437    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:42.165936    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:42.165956    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:42.165962    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:42.201342    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:42.201352    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:42.220374    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:42.220385    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:42.234712    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:42.234721    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:42.251467    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:42.251477    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:42.280985    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:42.280998    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:42.294052    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:42.294063    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:42.329185    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:42.329195    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:42.347826    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:42.347837    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:42.363362    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:42.363371    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:42.375079    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:42.375090    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:42.386678    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:42.386688    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:42.391767    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:42.391773    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:42.404101    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:42.404111    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:42.416034    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:42.416044    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:43.113785    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:43.113878    4416 kubeadm.go:597] duration metric: took 4m3.770890792s to restartPrimaryControlPlane
	W1003 20:49:43.113943    4416 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 20:49:43.113975    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1003 20:49:44.119520    4416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005532334s)
	I1003 20:49:44.119967    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:49:44.125446    4416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:49:44.128666    4416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:49:44.131538    4416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:49:44.131544    4416 kubeadm.go:157] found existing configuration files:
	
	I1003 20:49:44.131576    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf
	I1003 20:49:44.134266    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:49:44.134296    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:49:44.136794    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf
	I1003 20:49:44.139348    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:49:44.139380    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:49:44.142534    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf
	I1003 20:49:44.145467    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:49:44.145514    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:49:44.148079    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf
	I1003 20:49:44.151229    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:49:44.151262    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:49:44.154534    4416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:49:44.173258    4416 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1003 20:49:44.173292    4416 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:49:44.218799    4416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:49:44.218945    4416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:49:44.219006    4416 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 20:49:44.272499    4416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:49:44.275686    4416 out.go:235]   - Generating certificates and keys ...
	I1003 20:49:44.275720    4416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:49:44.275750    4416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:49:44.275799    4416 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 20:49:44.275830    4416 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1003 20:49:44.275870    4416 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 20:49:44.275898    4416 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1003 20:49:44.275924    4416 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1003 20:49:44.275973    4416 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1003 20:49:44.276034    4416 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 20:49:44.276098    4416 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 20:49:44.276119    4416 kubeadm.go:310] [certs] Using the existing "sa" key
	I1003 20:49:44.276148    4416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:49:44.345437    4416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:49:44.480196    4416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:49:44.576339    4416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:49:44.810412    4416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:49:44.841459    4416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:49:44.841867    4416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:49:44.841931    4416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:49:44.923326    4416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:49:44.942951    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:44.927560    4416 out.go:235]   - Booting up control plane ...
	I1003 20:49:44.927601    4416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:49:44.927637    4416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:49:44.927666    4416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:49:44.927714    4416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:49:44.927836    4416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 20:49:49.432499    4416 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.507279 seconds
	I1003 20:49:49.432570    4416 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:49:49.437097    4416 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:49:49.946367    4416 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:49:49.946488    4416 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-455000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:49:50.450621    4416 kubeadm.go:310] [bootstrap-token] Using token: jk3ppo.aut2r0gvifkpc0xd
	I1003 20:49:50.453790    4416 out.go:235]   - Configuring RBAC rules ...
	I1003 20:49:50.453851    4416 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:49:50.453901    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:49:50.459387    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:49:50.460445    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:49:50.461335    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:49:50.463069    4416 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:49:50.466612    4416 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:49:50.645269    4416 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:49:50.854707    4416 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:49:50.855292    4416 kubeadm.go:310] 
	I1003 20:49:50.855329    4416 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:49:50.855332    4416 kubeadm.go:310] 
	I1003 20:49:50.855369    4416 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:49:50.855374    4416 kubeadm.go:310] 
	I1003 20:49:50.855389    4416 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:49:50.855490    4416 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:49:50.855573    4416 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:49:50.855586    4416 kubeadm.go:310] 
	I1003 20:49:50.855663    4416 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:49:50.855671    4416 kubeadm.go:310] 
	I1003 20:49:50.855746    4416 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:49:50.855764    4416 kubeadm.go:310] 
	I1003 20:49:50.855846    4416 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:49:50.855958    4416 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:49:50.856096    4416 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:49:50.856117    4416 kubeadm.go:310] 
	I1003 20:49:50.856173    4416 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:49:50.856213    4416 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:49:50.856235    4416 kubeadm.go:310] 
	I1003 20:49:50.856272    4416 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jk3ppo.aut2r0gvifkpc0xd \
	I1003 20:49:50.856359    4416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 \
	I1003 20:49:50.856371    4416 kubeadm.go:310] 	--control-plane 
	I1003 20:49:50.856374    4416 kubeadm.go:310] 
	I1003 20:49:50.856418    4416 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:49:50.856421    4416 kubeadm.go:310] 
	I1003 20:49:50.856470    4416 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jk3ppo.aut2r0gvifkpc0xd \
	I1003 20:49:50.856526    4416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 
	I1003 20:49:50.856618    4416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 20:49:50.856626    4416 cni.go:84] Creating CNI manager for ""
	I1003 20:49:50.856634    4416 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:49:50.860427    4416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 20:49:50.868475    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 20:49:50.872021    4416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
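The 496-byte conflist written above is minikube's generated bridge configuration; its exact contents are not shown in this log. A representative bridge CNI config of that shape (illustrative values, not the file minikube actually wrote) could be placed with:

    # illustrative sketch of a bridge CNI conflist, not the generated 496-byte file
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF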
	I1003 20:49:50.877278    4416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:49:50.877346    4416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:49:50.877374    4416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-455000 minikube.k8s.io/updated_at=2024_10_03T20_49_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=stopped-upgrade-455000 minikube.k8s.io/primary=true
	I1003 20:49:50.880562    4416 ops.go:34] apiserver oom_adj: -16
	I1003 20:49:50.921565    4416 kubeadm.go:1113] duration metric: took 44.279416ms to wait for elevateKubeSystemPrivileges
	I1003 20:49:50.921616    4416 kubeadm.go:394] duration metric: took 4m11.592371125s to StartCluster
	I1003 20:49:50.921628    4416 settings.go:142] acquiring lock: {Name:mkcb41cafeed9afeb88d9d6f184696173f92f60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:49:50.921711    4416 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:49:50.922153    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:49:50.922341    4416 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:49:50.922362    4416 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:49:50.922397    4416 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-455000"
	I1003 20:49:50.922405    4416 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-455000"
	W1003 20:49:50.922409    4416 addons.go:243] addon storage-provisioner should already be in state true
	I1003 20:49:50.922408    4416 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-455000"
	I1003 20:49:50.922419    4416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-455000"
	I1003 20:49:50.922421    4416 host.go:66] Checking if "stopped-upgrade-455000" exists ...
	I1003 20:49:50.922480    4416 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:49:50.923432    4416 kapi.go:59] client config for stopped-upgrade-455000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c765d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:49:50.923552    4416 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-455000"
	W1003 20:49:50.923556    4416 addons.go:243] addon default-storageclass should already be in state true
	I1003 20:49:50.923563    4416 host.go:66] Checking if "stopped-upgrade-455000" exists ...
	I1003 20:49:50.925471    4416 out.go:177] * Verifying Kubernetes components...
	I1003 20:49:50.925854    4416 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:49:50.929493    4416 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:49:50.929499    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:49:50.933371    4416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:49:49.945196    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:49.945339    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:49.957401    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:49.957484    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:49.968544    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:49.968616    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:49.985761    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:49.985838    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:49.996899    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:49.996976    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:50.007658    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:50.007730    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:50.018573    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:50.018648    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:50.029894    4280 logs.go:282] 0 containers: []
	W1003 20:49:50.029906    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:50.029977    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:50.041794    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:50.041813    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:50.041819    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:50.057113    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:50.057127    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:50.069128    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:50.069145    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:50.094295    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:50.094306    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:50.109503    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:50.109513    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:50.154016    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:50.154029    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:50.168403    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:50.168417    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:50.180873    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:50.180887    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:50.193205    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:50.193216    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:50.210397    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:50.210407    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:50.230671    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:50.230682    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:50.246012    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:50.246026    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:50.258328    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:50.258339    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:50.292674    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:50.292684    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:50.296978    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:50.296985    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:52.810706    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:50.937292    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:49:50.941422    4416 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:49:50.941428    4416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:49:50.941435    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:49:51.024781    4416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:49:51.030467    4416 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:49:51.030526    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:49:51.035323    4416 api_server.go:72] duration metric: took 112.968375ms to wait for apiserver process to appear ...
	I1003 20:49:51.035331    4416 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:49:51.035338    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:51.040237    4416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:49:51.057180    4416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:49:51.400350    4416 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:49:51.400363    4416 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:49:57.812927    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:57.813064    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:57.827142    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:49:57.827240    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:57.839452    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:49:57.839526    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:57.850501    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:49:57.850581    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:57.865015    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:49:57.865096    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:57.876034    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:49:57.876109    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:57.887840    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:49:57.887908    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:57.898186    4280 logs.go:282] 0 containers: []
	W1003 20:49:57.898202    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:57.898267    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:57.909418    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:49:57.909434    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:49:57.909441    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:49:57.922084    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:49:57.922095    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:49:57.936187    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:49:57.936198    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:49:57.948451    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:49:57.948462    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:49:57.963231    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:57.963242    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:58.000842    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:58.000849    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:58.005253    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:49:58.005259    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:49:58.019907    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:49:58.019917    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:58.033135    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:58.033151    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:58.068926    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:49:58.068937    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:49:58.081078    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:49:58.081088    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:49:58.093441    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:58.093455    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:58.117848    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:49:58.117857    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:49:58.139826    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:49:58.139837    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:49:58.152633    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:49:58.152647    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:49:56.036117    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:56.036144    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:00.670664    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:01.037393    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:01.037421    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:05.672904    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:05.673107    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:05.687275    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:05.687365    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:05.698187    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:05.698268    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:05.709725    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:50:05.709811    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:05.719981    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:05.720060    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:05.730811    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:05.730887    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:05.742344    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:05.742421    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:05.752483    4280 logs.go:282] 0 containers: []
	W1003 20:50:05.752494    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:05.752560    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:05.762989    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:05.763011    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:05.763017    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:05.774975    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:05.774986    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:05.792245    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:05.792258    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:05.817018    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:05.817031    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:05.853951    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:05.853963    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:05.869603    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:05.869614    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:05.905316    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:05.905324    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:05.920261    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:05.920272    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:05.932187    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:05.932197    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:05.944576    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:50:05.944592    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:50:05.957109    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:05.957124    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:05.968960    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:05.968973    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:05.982898    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:05.982912    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:06.000165    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:06.000179    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:06.011981    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:06.011995    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:08.518839    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:06.037601    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:06.037619    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:13.521110    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:13.521237    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:13.538441    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:13.538516    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:13.553031    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:13.553121    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:13.564074    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:50:13.564156    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:13.574893    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:13.574983    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:11.037877    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:11.037902    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:13.586778    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:13.586882    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:13.597469    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:13.597550    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:13.607614    4280 logs.go:282] 0 containers: []
	W1003 20:50:13.607627    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:13.607682    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:13.618308    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:13.618326    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:13.618332    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:13.652164    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:13.652174    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:13.663574    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:13.663586    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:13.674825    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:13.674835    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:13.700267    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:13.700276    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:13.705153    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:13.705162    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:13.720112    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:13.720127    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:13.734361    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:13.734372    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:13.749157    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:13.749168    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:13.760952    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:13.760963    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:13.778764    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:13.778774    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:13.815325    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:13.815336    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:13.827318    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:50:13.827329    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:50:13.839762    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:13.839774    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:13.852099    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:13.852110    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:16.368165    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:16.038359    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:16.038401    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:21.039001    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:21.039027    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1003 20:50:21.402220    4416 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1003 20:50:21.406429    4416 out.go:177] * Enabled addons: storage-provisioner
	I1003 20:50:21.370467    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:21.370692    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:21.388838    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:21.388936    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:21.402637    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:21.402714    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:21.414532    4280 logs.go:282] 4 containers: [dbdc722f9f79 6f01bb70655f 0a2b0bd296a5 e68525deae30]
	I1003 20:50:21.414606    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:21.429884    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:21.429960    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:21.440655    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:21.440730    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:21.450968    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:21.451038    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:21.462919    4280 logs.go:282] 0 containers: []
	W1003 20:50:21.462929    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:21.462995    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:21.473129    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:21.473145    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:21.473151    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:21.487827    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:21.487843    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:21.502856    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:21.502869    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:21.537388    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:21.537397    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:21.572376    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:21.572387    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:21.584842    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:21.584855    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:21.589341    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:21.589349    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:21.601162    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:21.601175    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:21.613652    4280 logs.go:123] Gathering logs for coredns [e68525deae30] ...
	I1003 20:50:21.613665    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e68525deae30"
	I1003 20:50:21.625277    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:21.625289    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:21.649839    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:21.649848    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:21.661803    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:21.661816    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:21.676266    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:21.676278    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:21.687981    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:21.687995    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:21.705571    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:21.705585    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:21.413399    4416 addons.go:510] duration metric: took 30.491031917s for enable addons: enabled=[storage-provisioner]
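Once the apiserver becomes reachable, the addon state reported above can be checked directly on the guest. A quick sketch using the kubeconfig and binary paths from this log (commands are illustrative, not part of the test run):

    # illustrative: verify the storage-provisioner pod and the (missing) default StorageClass
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclass
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pods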
	I1003 20:50:24.219563    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:26.039656    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:26.039681    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:29.221878    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:29.222040    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:29.235265    4280 logs.go:282] 1 containers: [f0316444a698]
	I1003 20:50:29.235355    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:29.246033    4280 logs.go:282] 1 containers: [2b26cbb8b117]
	I1003 20:50:29.246113    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:29.257653    4280 logs.go:282] 4 containers: [05fd43da78d5 dbdc722f9f79 6f01bb70655f 0a2b0bd296a5]
	I1003 20:50:29.257722    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:29.272090    4280 logs.go:282] 1 containers: [f57d787bfe96]
	I1003 20:50:29.272169    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:29.282832    4280 logs.go:282] 1 containers: [4e2449569f5f]
	I1003 20:50:29.282908    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:29.293532    4280 logs.go:282] 1 containers: [0a7d220e3a16]
	I1003 20:50:29.293604    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:29.310467    4280 logs.go:282] 0 containers: []
	W1003 20:50:29.310480    4280 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:29.310551    4280 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:29.320640    4280 logs.go:282] 1 containers: [783681e32dfc]
	I1003 20:50:29.320659    4280 logs.go:123] Gathering logs for etcd [2b26cbb8b117] ...
	I1003 20:50:29.320665    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b26cbb8b117"
	I1003 20:50:29.335366    4280 logs.go:123] Gathering logs for coredns [05fd43da78d5] ...
	I1003 20:50:29.335380    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05fd43da78d5"
	I1003 20:50:29.351021    4280 logs.go:123] Gathering logs for coredns [0a2b0bd296a5] ...
	I1003 20:50:29.351036    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2b0bd296a5"
	I1003 20:50:29.362698    4280 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:29.362712    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:29.398734    4280 logs.go:123] Gathering logs for kube-apiserver [f0316444a698] ...
	I1003 20:50:29.398750    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0316444a698"
	I1003 20:50:29.413408    4280 logs.go:123] Gathering logs for kube-proxy [4e2449569f5f] ...
	I1003 20:50:29.413417    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2449569f5f"
	I1003 20:50:29.424995    4280 logs.go:123] Gathering logs for kube-controller-manager [0a7d220e3a16] ...
	I1003 20:50:29.425005    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7d220e3a16"
	I1003 20:50:29.449564    4280 logs.go:123] Gathering logs for kube-scheduler [f57d787bfe96] ...
	I1003 20:50:29.449579    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57d787bfe96"
	I1003 20:50:29.466023    4280 logs.go:123] Gathering logs for container status ...
	I1003 20:50:29.466032    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:29.477380    4280 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:29.477392    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:29.481735    4280 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:29.481741    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:29.516333    4280 logs.go:123] Gathering logs for storage-provisioner [783681e32dfc] ...
	I1003 20:50:29.516349    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 783681e32dfc"
	I1003 20:50:29.528629    4280 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:29.528640    4280 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:29.553546    4280 logs.go:123] Gathering logs for coredns [dbdc722f9f79] ...
	I1003 20:50:29.553555    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbdc722f9f79"
	I1003 20:50:29.565779    4280 logs.go:123] Gathering logs for coredns [6f01bb70655f] ...
	I1003 20:50:29.565793    4280 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f01bb70655f"
	I1003 20:50:32.078642    4280 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:31.040497    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:31.040536    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:37.079982    4280 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:37.084693    4280 out.go:201] 
	W1003 20:50:37.088519    4280 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1003 20:50:37.088528    4280 out.go:270] * 
	W1003 20:50:37.089107    4280 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:50:37.099484    4280 out.go:201] 
	I1003 20:50:36.041609    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:36.041686    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:41.043135    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:41.043160    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:46.044839    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:46.044865    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
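Both minikube processes above (PIDs 4280 and 4416) poll the apiserver's /healthz endpoint until their deadline expires, and every probe here times out. A stand-alone version of that poll loop (a minimal sketch, assuming the endpoint from the log and curl available on the guest) looks like:

    # illustrative healthz poll: retry every 5s for up to 5 minutes
    for i in $(seq 1 60); do
      if curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok$'; then
        echo "apiserver healthy"
        exit 0
      fi
      sleep 5
    done
    echo "apiserver never reported healthy" >&2
    exit 1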
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-10-04 03:41:41 UTC, ends at Fri 2024-10-04 03:50:53 UTC. --
	Oct 04 03:50:37 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 04 03:50:37 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:37Z" level=error msg="ContainerStats resp: {0x40008a18c0 linux}"
	Oct 04 03:50:37 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:37Z" level=error msg="ContainerStats resp: {0x40008a1a00 linux}"
	Oct 04 03:50:37 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:37Z" level=error msg="ContainerStats resp: {0x40006143c0 linux}"
	Oct 04 03:50:38 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:38Z" level=error msg="ContainerStats resp: {0x40006154c0 linux}"
	Oct 04 03:50:39 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:39Z" level=error msg="ContainerStats resp: {0x4000776b40 linux}"
	Oct 04 03:50:39 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:39Z" level=error msg="ContainerStats resp: {0x4000776c80 linux}"
	Oct 04 03:50:39 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:39Z" level=error msg="ContainerStats resp: {0x4000777340 linux}"
	Oct 04 03:50:39 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:39Z" level=error msg="ContainerStats resp: {0x4000777a80 linux}"
	Oct 04 03:50:39 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:39Z" level=error msg="ContainerStats resp: {0x40001a02c0 linux}"
	Oct 04 03:50:39 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:39Z" level=error msg="ContainerStats resp: {0x400071fdc0 linux}"
	Oct 04 03:50:39 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:39Z" level=error msg="ContainerStats resp: {0x40003a3380 linux}"
	Oct 04 03:50:42 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:42Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 04 03:50:47 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 04 03:50:49 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:49Z" level=error msg="ContainerStats resp: {0x4000615300 linux}"
	Oct 04 03:50:49 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:49Z" level=error msg="ContainerStats resp: {0x4000746b40 linux}"
	Oct 04 03:50:50 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:50Z" level=error msg="ContainerStats resp: {0x40001a0240 linux}"
	Oct 04 03:50:51 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:51Z" level=error msg="ContainerStats resp: {0x40001a15c0 linux}"
	Oct 04 03:50:51 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:51Z" level=error msg="ContainerStats resp: {0x40001a00c0 linux}"
	Oct 04 03:50:51 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:51Z" level=error msg="ContainerStats resp: {0x400071e5c0 linux}"
	Oct 04 03:50:51 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:51Z" level=error msg="ContainerStats resp: {0x400071ebc0 linux}"
	Oct 04 03:50:51 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:51Z" level=error msg="ContainerStats resp: {0x40001a1440 linux}"
	Oct 04 03:50:51 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:51Z" level=error msg="ContainerStats resp: {0x400071f7c0 linux}"
	Oct 04 03:50:51 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:51Z" level=error msg="ContainerStats resp: {0x400071fbc0 linux}"
	Oct 04 03:50:52 running-upgrade-902000 cri-dockerd[2748]: time="2024-10-04T03:50:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	558d98e9e00b7       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   64a16f81ec70f
	05fd43da78d51       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   705456e48bbc9
	dbdc722f9f791       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   64a16f81ec70f
	6f01bb70655ff       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   705456e48bbc9
	4e2449569f5fe       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c0b8c1c4e9d88
	783681e32dfce       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   b2765c1ba4344
	2b26cbb8b117b       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   c6ac0889e2da0
	0a7d220e3a167       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   bb8d919539ed4
	f0316444a6989       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   70145deb9c983
	f57d787bfe96d       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   64cc763b74937
	
	
	==> coredns [05fd43da78d5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1278530787605533463.901720746753697407. HINFO: read udp 10.244.0.2:50108->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1278530787605533463.901720746753697407. HINFO: read udp 10.244.0.2:47031->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1278530787605533463.901720746753697407. HINFO: read udp 10.244.0.2:51126->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1278530787605533463.901720746753697407. HINFO: read udp 10.244.0.2:43110->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1278530787605533463.901720746753697407. HINFO: read udp 10.244.0.2:38160->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1278530787605533463.901720746753697407. HINFO: read udp 10.244.0.2:50922->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1278530787605533463.901720746753697407. HINFO: read udp 10.244.0.2:60325->10.0.2.3:53: i/o timeout
	
	
	==> coredns [558d98e9e00b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7412259364796046650.4091012060724063154. HINFO: read udp 10.244.0.3:60826->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7412259364796046650.4091012060724063154. HINFO: read udp 10.244.0.3:55710->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7412259364796046650.4091012060724063154. HINFO: read udp 10.244.0.3:36758->10.0.2.3:53: i/o timeout
	
	
	==> coredns [6f01bb70655f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:37585->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:47412->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:48128->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:51454->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:39732->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:42617->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:36342->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:48886->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:33759->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3355410101822826810.4082043914403426568. HINFO: read udp 10.244.0.2:41725->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dbdc722f9f79] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:60757->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:45317->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:38724->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:45144->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:37338->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:39201->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:51607->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:44889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:54834->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7483158623102895565.3064657647945241015. HINFO: read udp 10.244.0.3:42910->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-902000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-902000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e
	                    minikube.k8s.io/name=running-upgrade-902000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_03T20_46_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 04 Oct 2024 03:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-902000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 04 Oct 2024 03:50:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 04 Oct 2024 03:46:36 +0000   Fri, 04 Oct 2024 03:46:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 04 Oct 2024 03:46:36 +0000   Fri, 04 Oct 2024 03:46:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 04 Oct 2024 03:46:36 +0000   Fri, 04 Oct 2024 03:46:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 04 Oct 2024 03:46:36 +0000   Fri, 04 Oct 2024 03:46:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-902000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 62a5a10710e340fa8e9a7b912d43cdd4
	  System UUID:                62a5a10710e340fa8e9a7b912d43cdd4
	  Boot ID:                    9a18be5d-53a3-4a1a-91c6-921b36efd09c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2qw4t                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-6cmnf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-902000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-902000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-902000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-lwxk9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-902000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-902000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-902000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-902000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-902000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-902000 event: Registered Node running-upgrade-902000 in Controller
	
	
	==> dmesg <==
	[  +1.704151] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.078218] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.078878] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.133800] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.095252] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.085835] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.299374] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[Oct 4 03:42] systemd-fstab-generator[1955]: Ignoring "noauto" for root device
	[  +2.526574] systemd-fstab-generator[2234]: Ignoring "noauto" for root device
	[  +0.135030] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[  +0.093165] systemd-fstab-generator[2278]: Ignoring "noauto" for root device
	[  +0.095307] systemd-fstab-generator[2291]: Ignoring "noauto" for root device
	[  +1.625026] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.135581] systemd-fstab-generator[2705]: Ignoring "noauto" for root device
	[  +0.080356] systemd-fstab-generator[2716]: Ignoring "noauto" for root device
	[  +0.074846] systemd-fstab-generator[2727]: Ignoring "noauto" for root device
	[  +0.083873] systemd-fstab-generator[2741]: Ignoring "noauto" for root device
	[  +2.301596] systemd-fstab-generator[2891]: Ignoring "noauto" for root device
	[  +5.273924] systemd-fstab-generator[3296]: Ignoring "noauto" for root device
	[  +0.968738] systemd-fstab-generator[3423]: Ignoring "noauto" for root device
	[ +19.503471] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 4 03:46] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.457897] systemd-fstab-generator[11633]: Ignoring "noauto" for root device
	[  +5.652633] systemd-fstab-generator[12233]: Ignoring "noauto" for root device
	[  +0.478485] systemd-fstab-generator[12367]: Ignoring "noauto" for root device
	
	
	==> etcd [2b26cbb8b117] <==
	{"level":"info","ts":"2024-10-04T03:46:31.599Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-04T03:46:31.599Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-04T03:46:31.599Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-04T03:46:31.599Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-04T03:46:31.599Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-04T03:46:31.599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-04T03:46:31.599Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-04T03:46:32.171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-902000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:46:32.173Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:32.173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-04T03:46:32.173Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-04T03:46:32.172Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-04T03:46:32.175Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-04T03:46:32.176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:32.183Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-04T03:46:32.183Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 03:50:53 up 9 min,  0 users,  load average: 0.44, 0.39, 0.23
	Linux running-upgrade-902000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f0316444a698] <==
	I1004 03:46:33.396031       1 controller.go:611] quota admission added evaluator for: namespaces
	I1004 03:46:33.431918       1 cache.go:39] Caches are synced for autoregister controller
	I1004 03:46:33.431961       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1004 03:46:33.431948       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1004 03:46:33.432172       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1004 03:46:33.434750       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 03:46:33.439228       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1004 03:46:34.174386       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1004 03:46:34.337055       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 03:46:34.340354       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 03:46:34.340466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 03:46:34.476518       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 03:46:34.486620       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 03:46:34.599400       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1004 03:46:34.601816       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1004 03:46:34.602187       1 controller.go:611] quota admission added evaluator for: endpoints
	I1004 03:46:34.603458       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 03:46:35.468577       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1004 03:46:36.038495       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1004 03:46:36.042203       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1004 03:46:36.079059       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1004 03:46:36.103105       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 03:46:48.674153       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1004 03:46:49.175049       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1004 03:46:49.848448       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [0a7d220e3a16] <==
	I1004 03:46:48.324102       1 shared_informer.go:262] Caches are synced for TTL
	I1004 03:46:48.324160       1 shared_informer.go:262] Caches are synced for PVC protection
	I1004 03:46:48.349717       1 shared_informer.go:262] Caches are synced for taint
	I1004 03:46:48.349785       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1004 03:46:48.349820       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-902000. Assuming now as a timestamp.
	I1004 03:46:48.349853       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1004 03:46:48.349855       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1004 03:46:48.349963       1 event.go:294] "Event occurred" object="running-upgrade-902000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-902000 event: Registered Node running-upgrade-902000 in Controller"
	I1004 03:46:48.416592       1 shared_informer.go:262] Caches are synced for stateful set
	I1004 03:46:48.456237       1 shared_informer.go:262] Caches are synced for deployment
	I1004 03:46:48.465951       1 shared_informer.go:262] Caches are synced for disruption
	I1004 03:46:48.465978       1 disruption.go:371] Sending events to api server.
	I1004 03:46:48.466019       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1004 03:46:48.467028       1 shared_informer.go:262] Caches are synced for cronjob
	I1004 03:46:48.478963       1 shared_informer.go:262] Caches are synced for resource quota
	I1004 03:46:48.484592       1 shared_informer.go:262] Caches are synced for endpoint
	I1004 03:46:48.526400       1 shared_informer.go:262] Caches are synced for resource quota
	I1004 03:46:48.573837       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1004 03:46:48.676704       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lwxk9"
	I1004 03:46:48.944207       1 shared_informer.go:262] Caches are synced for garbage collector
	I1004 03:46:48.996581       1 shared_informer.go:262] Caches are synced for garbage collector
	I1004 03:46:48.996625       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1004 03:46:49.176456       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1004 03:46:49.328764       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2qw4t"
	I1004 03:46:49.333979       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-6cmnf"
	
	
	==> kube-proxy [4e2449569f5f] <==
	I1004 03:46:49.820282       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1004 03:46:49.820377       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1004 03:46:49.820423       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1004 03:46:49.843151       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1004 03:46:49.843162       1 server_others.go:206] "Using iptables Proxier"
	I1004 03:46:49.843176       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1004 03:46:49.843287       1 server.go:661] "Version info" version="v1.24.1"
	I1004 03:46:49.843290       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 03:46:49.846283       1 config.go:317] "Starting service config controller"
	I1004 03:46:49.846298       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1004 03:46:49.846313       1 config.go:226] "Starting endpoint slice config controller"
	I1004 03:46:49.846380       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1004 03:46:49.846339       1 config.go:444] "Starting node config controller"
	I1004 03:46:49.847892       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1004 03:46:49.946816       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1004 03:46:49.946860       1 shared_informer.go:262] Caches are synced for service config
	I1004 03:46:49.951106       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f57d787bfe96] <==
	W1004 03:46:33.393030       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:46:33.393603       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 03:46:33.393041       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 03:46:33.393719       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 03:46:33.393052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 03:46:33.393825       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 03:46:33.393076       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 03:46:33.393876       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1004 03:46:33.393086       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 03:46:33.393914       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 03:46:33.393097       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 03:46:33.393963       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 03:46:33.392968       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 03:46:33.393994       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 03:46:33.393155       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 03:46:33.393210       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 03:46:33.394725       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 03:46:33.395475       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 03:46:34.301801       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:34.301842       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1004 03:46:34.301953       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 03:46:34.301967       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 03:46:34.410501       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 03:46:34.410522       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1004 03:46:34.894663       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-10-04 03:41:41 UTC, ends at Fri 2024-10-04 03:50:53 UTC. --
	Oct 04 03:46:38 running-upgrade-902000 kubelet[12240]: E1004 03:46:38.276231   12240 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-902000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-902000"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.303565   12240 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.304019   12240 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.354770   12240 topology_manager.go:200] "Topology Admit Handler"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.404690   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/53ac35b8-9fe5-4870-aa46-3fb9a0934809-tmp\") pod \"storage-provisioner\" (UID: \"53ac35b8-9fe5-4870-aa46-3fb9a0934809\") " pod="kube-system/storage-provisioner"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.404713   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6pvt\" (UniqueName: \"kubernetes.io/projected/53ac35b8-9fe5-4870-aa46-3fb9a0934809-kube-api-access-b6pvt\") pod \"storage-provisioner\" (UID: \"53ac35b8-9fe5-4870-aa46-3fb9a0934809\") " pod="kube-system/storage-provisioner"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: E1004 03:46:48.508559   12240 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: E1004 03:46:48.508576   12240 projected.go:192] Error preparing data for projected volume kube-api-access-b6pvt for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: E1004 03:46:48.508607   12240 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/53ac35b8-9fe5-4870-aa46-3fb9a0934809-kube-api-access-b6pvt podName:53ac35b8-9fe5-4870-aa46-3fb9a0934809 nodeName:}" failed. No retries permitted until 2024-10-04 03:46:49.008595547 +0000 UTC m=+12.980706541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b6pvt" (UniqueName: "kubernetes.io/projected/53ac35b8-9fe5-4870-aa46-3fb9a0934809-kube-api-access-b6pvt") pod "storage-provisioner" (UID: "53ac35b8-9fe5-4870-aa46-3fb9a0934809") : configmap "kube-root-ca.crt" not found
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.680213   12240 topology_manager.go:200] "Topology Admit Handler"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.706459   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwsl\" (UniqueName: \"kubernetes.io/projected/9afc344c-9d39-4668-a7ba-7b0f4439acbd-kube-api-access-blwsl\") pod \"kube-proxy-lwxk9\" (UID: \"9afc344c-9d39-4668-a7ba-7b0f4439acbd\") " pod="kube-system/kube-proxy-lwxk9"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.706566   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9afc344c-9d39-4668-a7ba-7b0f4439acbd-xtables-lock\") pod \"kube-proxy-lwxk9\" (UID: \"9afc344c-9d39-4668-a7ba-7b0f4439acbd\") " pod="kube-system/kube-proxy-lwxk9"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.706593   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9afc344c-9d39-4668-a7ba-7b0f4439acbd-lib-modules\") pod \"kube-proxy-lwxk9\" (UID: \"9afc344c-9d39-4668-a7ba-7b0f4439acbd\") " pod="kube-system/kube-proxy-lwxk9"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: I1004 03:46:48.706604   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9afc344c-9d39-4668-a7ba-7b0f4439acbd-kube-proxy\") pod \"kube-proxy-lwxk9\" (UID: \"9afc344c-9d39-4668-a7ba-7b0f4439acbd\") " pod="kube-system/kube-proxy-lwxk9"
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: E1004 03:46:48.810063   12240 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: E1004 03:46:48.810145   12240 projected.go:192] Error preparing data for projected volume kube-api-access-blwsl for pod kube-system/kube-proxy-lwxk9: configmap "kube-root-ca.crt" not found
	Oct 04 03:46:48 running-upgrade-902000 kubelet[12240]: E1004 03:46:48.810175   12240 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/9afc344c-9d39-4668-a7ba-7b0f4439acbd-kube-api-access-blwsl podName:9afc344c-9d39-4668-a7ba-7b0f4439acbd nodeName:}" failed. No retries permitted until 2024-10-04 03:46:49.310165914 +0000 UTC m=+13.282276949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-blwsl" (UniqueName: "kubernetes.io/projected/9afc344c-9d39-4668-a7ba-7b0f4439acbd-kube-api-access-blwsl") pod "kube-proxy-lwxk9" (UID: "9afc344c-9d39-4668-a7ba-7b0f4439acbd") : configmap "kube-root-ca.crt" not found
	Oct 04 03:46:49 running-upgrade-902000 kubelet[12240]: I1004 03:46:49.330548   12240 topology_manager.go:200] "Topology Admit Handler"
	Oct 04 03:46:49 running-upgrade-902000 kubelet[12240]: I1004 03:46:49.341038   12240 topology_manager.go:200] "Topology Admit Handler"
	Oct 04 03:46:49 running-upgrade-902000 kubelet[12240]: I1004 03:46:49.412463   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45ef54fa-fbdc-44da-9ecc-33760f2340e8-config-volume\") pod \"coredns-6d4b75cb6d-2qw4t\" (UID: \"45ef54fa-fbdc-44da-9ecc-33760f2340e8\") " pod="kube-system/coredns-6d4b75cb6d-2qw4t"
	Oct 04 03:46:49 running-upgrade-902000 kubelet[12240]: I1004 03:46:49.412501   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k6pm\" (UniqueName: \"kubernetes.io/projected/45ef54fa-fbdc-44da-9ecc-33760f2340e8-kube-api-access-8k6pm\") pod \"coredns-6d4b75cb6d-2qw4t\" (UID: \"45ef54fa-fbdc-44da-9ecc-33760f2340e8\") " pod="kube-system/coredns-6d4b75cb6d-2qw4t"
	Oct 04 03:46:49 running-upgrade-902000 kubelet[12240]: I1004 03:46:49.412513   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc67f0a3-3f46-4cd2-a06f-04f7faa8ba0f-config-volume\") pod \"coredns-6d4b75cb6d-6cmnf\" (UID: \"bc67f0a3-3f46-4cd2-a06f-04f7faa8ba0f\") " pod="kube-system/coredns-6d4b75cb6d-6cmnf"
	Oct 04 03:46:49 running-upgrade-902000 kubelet[12240]: I1004 03:46:49.412524   12240 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkscq\" (UniqueName: \"kubernetes.io/projected/bc67f0a3-3f46-4cd2-a06f-04f7faa8ba0f-kube-api-access-gkscq\") pod \"coredns-6d4b75cb6d-6cmnf\" (UID: \"bc67f0a3-3f46-4cd2-a06f-04f7faa8ba0f\") " pod="kube-system/coredns-6d4b75cb6d-6cmnf"
	Oct 04 03:50:28 running-upgrade-902000 kubelet[12240]: I1004 03:50:28.382799   12240 scope.go:110] "RemoveContainer" containerID="e68525deae3052e71e58abef5a4e8c9495ba00cf1b3efced36f1f7c89a1bb5e0"
	Oct 04 03:50:38 running-upgrade-902000 kubelet[12240]: I1004 03:50:38.450143   12240 scope.go:110] "RemoveContainer" containerID="0a2b0bd296a59733c8bb4e96368b660eadd247ed74aed6fff05e314c0bb5b69b"
	
	
	==> storage-provisioner [783681e32dfc] <==
	I1004 03:46:49.483738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 03:46:49.488967       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 03:46:49.488994       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 03:46:49.492418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 03:46:49.492533       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-902000_88d87c4b-e8a1-417c-b0b4-39b2b48858b2!
	I1004 03:46:49.492440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"140baacc-4925-48b5-8565-05168f16b034", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-902000_88d87c4b-e8a1-417c-b0b4-39b2b48858b2 became leader
	I1004 03:46:49.592861       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-902000_88d87c4b-e8a1-417c-b0b4-39b2b48858b2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-902000 -n running-upgrade-902000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-902000 -n running-upgrade-902000: exit status 2 (15.629043584s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-902000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-902000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-902000
--- FAIL: TestRunningBinaryUpgrade (621.33s)

TestKubernetesUpgrade (18.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-554000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-554000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.892335041s)

-- stdout --
	* [kubernetes-upgrade-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-554000" primary control-plane node in "kubernetes-upgrade-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:43:48.280373    4342 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:43:48.280549    4342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:43:48.280555    4342 out.go:358] Setting ErrFile to fd 2...
	I1003 20:43:48.280558    4342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:43:48.280700    4342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:43:48.281903    4342 out.go:352] Setting JSON to false
	I1003 20:43:48.299923    4342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4399,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:43:48.299985    4342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:43:48.304537    4342 out.go:177] * [kubernetes-upgrade-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:43:48.312501    4342 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:43:48.312542    4342 notify.go:220] Checking for updates...
	I1003 20:43:48.319410    4342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:43:48.322550    4342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:43:48.325442    4342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:43:48.326761    4342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:43:48.329484    4342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:43:48.332754    4342 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:43:48.332824    4342 config.go:182] Loaded profile config "running-upgrade-902000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:43:48.332870    4342 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:43:48.337227    4342 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:43:48.344415    4342 start.go:297] selected driver: qemu2
	I1003 20:43:48.344420    4342 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:43:48.344425    4342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:43:48.346939    4342 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:43:48.350424    4342 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:43:48.353481    4342 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 20:43:48.353496    4342 cni.go:84] Creating CNI manager for ""
	I1003 20:43:48.353524    4342 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 20:43:48.353553    4342 start.go:340] cluster config:
	{Name:kubernetes-upgrade-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:43:48.358085    4342 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:43:48.366417    4342 out.go:177] * Starting "kubernetes-upgrade-554000" primary control-plane node in "kubernetes-upgrade-554000" cluster
	I1003 20:43:48.370415    4342 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 20:43:48.370428    4342 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1003 20:43:48.370438    4342 cache.go:56] Caching tarball of preloaded images
	I1003 20:43:48.370506    4342 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:43:48.370511    4342 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1003 20:43:48.370580    4342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/kubernetes-upgrade-554000/config.json ...
	I1003 20:43:48.370593    4342 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/kubernetes-upgrade-554000/config.json: {Name:mkb187703dae98b8da57714bb577a869570a9efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:43:48.370833    4342 start.go:360] acquireMachinesLock for kubernetes-upgrade-554000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:43:48.370886    4342 start.go:364] duration metric: took 41.625µs to acquireMachinesLock for "kubernetes-upgrade-554000"
	I1003 20:43:48.370897    4342 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:43:48.370920    4342 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:43:48.378398    4342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:43:48.395352    4342 start.go:159] libmachine.API.Create for "kubernetes-upgrade-554000" (driver="qemu2")
	I1003 20:43:48.395382    4342 client.go:168] LocalClient.Create starting
	I1003 20:43:48.395455    4342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:43:48.395498    4342 main.go:141] libmachine: Decoding PEM data...
	I1003 20:43:48.395506    4342 main.go:141] libmachine: Parsing certificate...
	I1003 20:43:48.395554    4342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:43:48.395583    4342 main.go:141] libmachine: Decoding PEM data...
	I1003 20:43:48.395589    4342 main.go:141] libmachine: Parsing certificate...
	I1003 20:43:48.395974    4342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:43:48.539461    4342 main.go:141] libmachine: Creating SSH key...
	I1003 20:43:48.699350    4342 main.go:141] libmachine: Creating Disk image...
	I1003 20:43:48.699362    4342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:43:48.699610    4342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:43:48.709726    4342 main.go:141] libmachine: STDOUT: 
	I1003 20:43:48.709751    4342 main.go:141] libmachine: STDERR: 
	I1003 20:43:48.709812    4342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2 +20000M
	I1003 20:43:48.718372    4342 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:43:48.718385    4342 main.go:141] libmachine: STDERR: 
	I1003 20:43:48.718399    4342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:43:48.718404    4342 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:43:48.718416    4342 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:43:48.718455    4342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:91:0a:d5:85:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:43:48.720253    4342 main.go:141] libmachine: STDOUT: 
	I1003 20:43:48.720274    4342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:43:48.720296    4342 client.go:171] duration metric: took 324.907541ms to LocalClient.Create
	I1003 20:43:50.722500    4342 start.go:128] duration metric: took 2.351559334s to createHost
	I1003 20:43:50.722518    4342 start.go:83] releasing machines lock for "kubernetes-upgrade-554000", held for 2.351627833s
	W1003 20:43:50.722547    4342 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:43:50.732143    4342 out.go:177] * Deleting "kubernetes-upgrade-554000" in qemu2 ...
	W1003 20:43:50.740885    4342 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:43:50.740892    4342 start.go:729] Will try again in 5 seconds ...
	I1003 20:43:55.743074    4342 start.go:360] acquireMachinesLock for kubernetes-upgrade-554000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:43:55.743611    4342 start.go:364] duration metric: took 455.542µs to acquireMachinesLock for "kubernetes-upgrade-554000"
	I1003 20:43:55.743717    4342 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:43:55.743969    4342 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:43:55.752706    4342 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:43:55.788474    4342 start.go:159] libmachine.API.Create for "kubernetes-upgrade-554000" (driver="qemu2")
	I1003 20:43:55.788523    4342 client.go:168] LocalClient.Create starting
	I1003 20:43:55.788659    4342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:43:55.788744    4342 main.go:141] libmachine: Decoding PEM data...
	I1003 20:43:55.788759    4342 main.go:141] libmachine: Parsing certificate...
	I1003 20:43:55.788813    4342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:43:55.788865    4342 main.go:141] libmachine: Decoding PEM data...
	I1003 20:43:55.788882    4342 main.go:141] libmachine: Parsing certificate...
	I1003 20:43:55.789448    4342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:43:55.936752    4342 main.go:141] libmachine: Creating SSH key...
	I1003 20:43:56.080841    4342 main.go:141] libmachine: Creating Disk image...
	I1003 20:43:56.080850    4342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:43:56.081083    4342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:43:56.091385    4342 main.go:141] libmachine: STDOUT: 
	I1003 20:43:56.091403    4342 main.go:141] libmachine: STDERR: 
	I1003 20:43:56.091473    4342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2 +20000M
	I1003 20:43:56.100127    4342 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:43:56.100154    4342 main.go:141] libmachine: STDERR: 
	I1003 20:43:56.100169    4342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:43:56.100180    4342 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:43:56.100190    4342 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:43:56.100218    4342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:b8:a9:48:b7:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:43:56.102119    4342 main.go:141] libmachine: STDOUT: 
	I1003 20:43:56.102132    4342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:43:56.102144    4342 client.go:171] duration metric: took 313.612708ms to LocalClient.Create
	I1003 20:43:58.104375    4342 start.go:128] duration metric: took 2.360365542s to createHost
	I1003 20:43:58.104461    4342 start.go:83] releasing machines lock for "kubernetes-upgrade-554000", held for 2.360831s
	W1003 20:43:58.104856    4342 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:43:58.115606    4342 out.go:201] 
	W1003 20:43:58.118541    4342 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:43:58.118570    4342 out.go:270] * 
	* 
	W1003 20:43:58.121255    4342 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:43:58.129514    4342 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-554000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-554000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-554000: (3.145439625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-554000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-554000 status --format={{.Host}}: exit status 7 (59.972417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-554000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-554000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.179722333s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-554000" primary control-plane node in "kubernetes-upgrade-554000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:44:01.382873    4382 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:44:01.383062    4382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:44:01.383068    4382 out.go:358] Setting ErrFile to fd 2...
	I1003 20:44:01.383071    4382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:44:01.383206    4382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:44:01.384536    4382 out.go:352] Setting JSON to false
	I1003 20:44:01.403010    4382 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4412,"bootTime":1728009029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:44:01.403076    4382 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:44:01.407843    4382 out.go:177] * [kubernetes-upgrade-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:44:01.414851    4382 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:44:01.414895    4382 notify.go:220] Checking for updates...
	I1003 20:44:01.420670    4382 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:44:01.423721    4382 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:44:01.426653    4382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:44:01.429718    4382 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:44:01.432706    4382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:44:01.435953    4382 config.go:182] Loaded profile config "kubernetes-upgrade-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1003 20:44:01.436220    4382 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:44:01.440649    4382 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:44:01.447643    4382 start.go:297] selected driver: qemu2
	I1003 20:44:01.447648    4382 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-554000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:44:01.447692    4382 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:44:01.450151    4382 cni.go:84] Creating CNI manager for ""
	I1003 20:44:01.450190    4382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:44:01.450215    4382 start.go:340] cluster config:
	{Name:kubernetes-upgrade-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-554000 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:44:01.454483    4382 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:44:01.462496    4382 out.go:177] * Starting "kubernetes-upgrade-554000" primary control-plane node in "kubernetes-upgrade-554000" cluster
	I1003 20:44:01.466643    4382 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:44:01.466670    4382 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:44:01.466679    4382 cache.go:56] Caching tarball of preloaded images
	I1003 20:44:01.466763    4382 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:44:01.466769    4382 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:44:01.466824    4382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/kubernetes-upgrade-554000/config.json ...
	I1003 20:44:01.467155    4382 start.go:360] acquireMachinesLock for kubernetes-upgrade-554000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:44:01.467185    4382 start.go:364] duration metric: took 22.75µs to acquireMachinesLock for "kubernetes-upgrade-554000"
	I1003 20:44:01.467194    4382 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:44:01.467198    4382 fix.go:54] fixHost starting: 
	I1003 20:44:01.467312    4382 fix.go:112] recreateIfNeeded on kubernetes-upgrade-554000: state=Stopped err=<nil>
	W1003 20:44:01.467319    4382 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:44:01.475692    4382 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-554000" ...
	I1003 20:44:01.479697    4382 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:44:01.479731    4382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:b8:a9:48:b7:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:44:01.481978    4382 main.go:141] libmachine: STDOUT: 
	I1003 20:44:01.481995    4382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:44:01.482023    4382 fix.go:56] duration metric: took 14.823625ms for fixHost
	I1003 20:44:01.482028    4382 start.go:83] releasing machines lock for "kubernetes-upgrade-554000", held for 14.837833ms
	W1003 20:44:01.482034    4382 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:44:01.482070    4382 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:44:01.482074    4382 start.go:729] Will try again in 5 seconds ...
	I1003 20:44:06.484129    4382 start.go:360] acquireMachinesLock for kubernetes-upgrade-554000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:44:06.484251    4382 start.go:364] duration metric: took 107.584µs to acquireMachinesLock for "kubernetes-upgrade-554000"
	I1003 20:44:06.484265    4382 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:44:06.484269    4382 fix.go:54] fixHost starting: 
	I1003 20:44:06.484403    4382 fix.go:112] recreateIfNeeded on kubernetes-upgrade-554000: state=Stopped err=<nil>
	W1003 20:44:06.484408    4382 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:44:06.488740    4382 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-554000" ...
	I1003 20:44:06.496635    4382 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:44:06.496693    4382 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:b8:a9:48:b7:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubernetes-upgrade-554000/disk.qcow2
	I1003 20:44:06.499120    4382 main.go:141] libmachine: STDOUT: 
	I1003 20:44:06.499142    4382 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:44:06.499164    4382 fix.go:56] duration metric: took 14.894667ms for fixHost
	I1003 20:44:06.499170    4382 start.go:83] releasing machines lock for "kubernetes-upgrade-554000", held for 14.914041ms
	W1003 20:44:06.499215    4382 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:44:06.507585    4382 out.go:201] 
	W1003 20:44:06.511602    4382 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:44:06.511609    4382 out.go:270] * 
	* 
	W1003 20:44:06.512120    4382 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:44:06.522527    4382 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-554000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-554000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-554000 version --output=json: exit status 1 (28.904875ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-554000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-03 20:44:06.56097 -0700 PDT m=+3399.280471084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-554000 -n kubernetes-upgrade-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-554000 -n kubernetes-upgrade-554000: exit status 7 (31.504625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-554000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-554000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-554000
--- FAIL: TestKubernetesUpgrade (18.41s)
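Note: every qemu2 start in this test dies at the same step: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created or restarted. A minimal, hypothetical Go probe of that socket (path taken from the SocketVMnetPath field in the log above; illustrative only, not minikube code):

	// probe_socket_vmnet.go - hypothetical diagnostic sketch.
	// Dials the unix socket that socket_vmnet_client needs; a "connection
	// refused" here reproduces the failure reported throughout this test.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // assumed path, from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If such a probe also fails on the build agent, the socket_vmnet daemon is most likely not running there, which would account for this and the other qemu2 GUEST_PROVISION failures in this report.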

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.34s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19546
- KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4225291650/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.34s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.07s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19546
- KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2802732660/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.07s)
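Note: both HyperkitDriverSkipUpgrade subtests fail identically with exit status 56 (DRV_UNSUPPORTED_OS), because the hyperkit driver is not available on Apple silicon (darwin/arm64), as the error message states. A hypothetical sketch of the kind of GOOS/GOARCH gate that yields this class of error (illustrative only; not minikube's actual check):

	package main

	import (
		"fmt"
		"runtime"
	)

	// hyperkitSupported is a hypothetical stand-in for the platform gate that
	// rejects the hyperkit driver anywhere other than darwin/amd64.
	func hyperkitSupported() error {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			return fmt.Errorf("the driver 'hyperkit' is not supported on %s/%s",
				runtime.GOOS, runtime.GOARCH)
		}
		return nil
	}

	func main() {
		if err := hyperkitSupported(); err != nil {
			// Mirrors the "X Exiting due to DRV_UNSUPPORTED_OS" line in the log.
			fmt.Println("DRV_UNSUPPORTED_OS:", err)
		}
	}

On an arm64 agent this exit code is the binary behaving as designed, so these two failures most likely reflect the test environment rather than a change in minikube itself.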

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (580.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.267806968 start -p stopped-upgrade-455000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.267806968 start -p stopped-upgrade-455000 --memory=2200 --vm-driver=qemu2 : (45.969496542s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.267806968 -p stopped-upgrade-455000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.267806968 -p stopped-upgrade-455000 stop: (12.099416209s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-455000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1003 20:45:54.778292    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:47:38.539455    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:47:51.693992    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-455000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.477221125s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-455000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-455000" primary control-plane node in "stopped-upgrade-455000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-455000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:45:09.560422    4416 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:45:09.560886    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:45:09.560890    4416 out.go:358] Setting ErrFile to fd 2...
	I1003 20:45:09.560892    4416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:45:09.561024    4416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:45:09.562350    4416 out.go:352] Setting JSON to false
	I1003 20:45:09.582857    4416 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4480,"bootTime":1728009029,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:45:09.582949    4416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:45:09.586169    4416 out.go:177] * [stopped-upgrade-455000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:45:09.593234    4416 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:45:09.593374    4416 notify.go:220] Checking for updates...
	I1003 20:45:09.600196    4416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:45:09.603236    4416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:45:09.606174    4416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:45:09.609268    4416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:45:09.612235    4416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:45:09.615507    4416 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:45:09.619172    4416 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1003 20:45:09.622141    4416 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:45:09.626169    4416 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:45:09.633142    4416 start.go:297] selected driver: qemu2
	I1003 20:45:09.633151    4416 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:45:09.633216    4416 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:45:09.635987    4416 cni.go:84] Creating CNI manager for ""
	I1003 20:45:09.636024    4416 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:45:09.636059    4416 start.go:340] cluster config:
	{Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:45:09.636118    4416 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:45:09.640172    4416 out.go:177] * Starting "stopped-upgrade-455000" primary control-plane node in "stopped-upgrade-455000" cluster
	I1003 20:45:09.648179    4416 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1003 20:45:09.648221    4416 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1003 20:45:09.648235    4416 cache.go:56] Caching tarball of preloaded images
	I1003 20:45:09.648371    4416 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:45:09.648386    4416 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1003 20:45:09.648453    4416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/config.json ...
	I1003 20:45:09.648789    4416 start.go:360] acquireMachinesLock for stopped-upgrade-455000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:45:09.648836    4416 start.go:364] duration metric: took 39.459µs to acquireMachinesLock for "stopped-upgrade-455000"
	I1003 20:45:09.648845    4416 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:45:09.648850    4416 fix.go:54] fixHost starting: 
	I1003 20:45:09.648973    4416 fix.go:112] recreateIfNeeded on stopped-upgrade-455000: state=Stopped err=<nil>
	W1003 20:45:09.648984    4416 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:45:09.653209    4416 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-455000" ...
	I1003 20:45:09.661233    4416 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:45:09.661359    4416 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50467-:22,hostfwd=tcp::50468-:2376,hostname=stopped-upgrade-455000 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/disk.qcow2
	I1003 20:45:09.710127    4416 main.go:141] libmachine: STDOUT: 
	I1003 20:45:09.710149    4416 main.go:141] libmachine: STDERR: 
	I1003 20:45:09.710155    4416 main.go:141] libmachine: Waiting for VM to start (ssh -p 50467 docker@127.0.0.1)...
	I1003 20:45:29.884611    4416 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/config.json ...
	I1003 20:45:29.885435    4416 machine.go:93] provisionDockerMachine start ...
	I1003 20:45:29.885601    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:29.886050    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:29.886066    4416 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 20:45:29.959639    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1003 20:45:29.959673    4416 buildroot.go:166] provisioning hostname "stopped-upgrade-455000"
	I1003 20:45:29.959805    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:29.960044    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:29.960056    4416 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-455000 && echo "stopped-upgrade-455000" | sudo tee /etc/hostname
	I1003 20:45:30.030260    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-455000
	
	I1003 20:45:30.030358    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.030556    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.030569    4416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-455000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-455000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-455000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 20:45:30.091204    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 20:45:30.091217    4416 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19546-1040/.minikube CaCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19546-1040/.minikube}
	I1003 20:45:30.091233    4416 buildroot.go:174] setting up certificates
	I1003 20:45:30.091238    4416 provision.go:84] configureAuth start
	I1003 20:45:30.091245    4416 provision.go:143] copyHostCerts
	I1003 20:45:30.091324    4416 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem, removing ...
	I1003 20:45:30.091332    4416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem
	I1003 20:45:30.091446    4416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.pem (1078 bytes)
	I1003 20:45:30.091677    4416 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem, removing ...
	I1003 20:45:30.091681    4416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem
	I1003 20:45:30.091749    4416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/cert.pem (1123 bytes)
	I1003 20:45:30.091892    4416 exec_runner.go:144] found /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem, removing ...
	I1003 20:45:30.091896    4416 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem
	I1003 20:45:30.091964    4416 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19546-1040/.minikube/key.pem (1675 bytes)
	I1003 20:45:30.092123    4416 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-455000 san=[127.0.0.1 localhost minikube stopped-upgrade-455000]
	I1003 20:45:30.193248    4416 provision.go:177] copyRemoteCerts
	I1003 20:45:30.193294    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 20:45:30.193301    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:45:30.221775    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1003 20:45:30.228945    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 20:45:30.235804    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1003 20:45:30.242280    4416 provision.go:87] duration metric: took 151.034708ms to configureAuth
	I1003 20:45:30.242288    4416 buildroot.go:189] setting minikube options for container-runtime
	I1003 20:45:30.242387    4416 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:45:30.242428    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.242514    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.242519    4416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1003 20:45:30.295150    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1003 20:45:30.295158    4416 buildroot.go:70] root file system type: tmpfs
	I1003 20:45:30.295205    4416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1003 20:45:30.295253    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.295342    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.295375    4416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1003 20:45:30.352059    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1003 20:45:30.352133    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.352253    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.352261    4416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1003 20:45:30.731203    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1003 20:45:30.731216    4416 machine.go:96] duration metric: took 845.770291ms to provisionDockerMachine
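
The unit update above relies on a small shell idiom: "sudo diff -u old new" exits non-zero when the rendered unit differs from the installed one (or, as here, when no unit exists yet), and only in that case is the new file moved into place and docker reloaded, enabled, and restarted. A rough Go analogue of the same check-then-install logic (paths and systemctl arguments taken from the log; the helper name is illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installIfChanged swaps in the rendered unit file and restarts docker
    // only when it differs from what is already installed (or none exists).
    func installIfChanged(installed, rendered string) error {
    	oldUnit, oldErr := os.ReadFile(installed)
    	newUnit, err := os.ReadFile(rendered)
    	if err != nil {
    		return err
    	}
    	if oldErr == nil && bytes.Equal(oldUnit, newUnit) {
    		return os.Remove(rendered) // identical: drop the .new file, no restart
    	}
    	if err := os.Rename(rendered, installed); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", "docker"},
    		{"restart", "docker"},
    	} {
    		cmd := exec.Command("systemctl", append([]string{"-f"}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := installIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new")
    	if err != nil {
    		panic(err)
    	}
    }
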
	I1003 20:45:30.731224    4416 start.go:293] postStartSetup for "stopped-upgrade-455000" (driver="qemu2")
	I1003 20:45:30.731230    4416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 20:45:30.731307    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 20:45:30.731316    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:45:30.761546    4416 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 20:45:30.762945    4416 info.go:137] Remote host: Buildroot 2021.02.12
	I1003 20:45:30.762950    4416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/addons for local assets ...
	I1003 20:45:30.763023    4416 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19546-1040/.minikube/files for local assets ...
	I1003 20:45:30.763169    4416 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem -> 15562.pem in /etc/ssl/certs
	I1003 20:45:30.763327    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 20:45:30.766023    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:45:30.773525    4416 start.go:296] duration metric: took 42.295208ms for postStartSetup
	I1003 20:45:30.773541    4416 fix.go:56] duration metric: took 21.124690584s for fixHost
	I1003 20:45:30.773591    4416 main.go:141] libmachine: Using SSH client type: native
	I1003 20:45:30.773696    4416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10469dc00] 0x1046a0440 <nil>  [] 0s} localhost 50467 <nil> <nil>}
	I1003 20:45:30.773708    4416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1003 20:45:30.825503    4416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728013531.303175838
	
	I1003 20:45:30.825514    4416 fix.go:216] guest clock: 1728013531.303175838
	I1003 20:45:30.825517    4416 fix.go:229] Guest: 2024-10-03 20:45:31.303175838 -0700 PDT Remote: 2024-10-03 20:45:30.773545 -0700 PDT m=+21.235994626 (delta=529.630838ms)
	I1003 20:45:30.825528    4416 fix.go:200] guest clock delta is within tolerance: 529.630838ms
	I1003 20:45:30.825530    4416 start.go:83] releasing machines lock for "stopped-upgrade-455000", held for 21.176687833s
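
The clock check above runs "date +%s.%N" on the guest, parses the seconds.nanoseconds value, and compares it to the host clock; the roughly 530 ms delta here is judged to be within tolerance, so the guest time is left alone. A sketch of that comparison (the tolerance constant is assumed for illustration, not read from fix.go):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	// Sample value taken from the log; in the real flow the host time is
    	// captured at the moment the guest command returns.
    	guest, err := parseGuestClock("1728013531.303175838\n")
    	if err != nil {
    		panic(err)
    	}
    	delta := guest.Sub(time.Now())
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold for illustration
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
    	}
    }
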
	I1003 20:45:30.825598    4416 ssh_runner.go:195] Run: cat /version.json
	I1003 20:45:30.825607    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:45:30.825634    4416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 20:45:30.825694    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	W1003 20:45:30.826122    4416 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50467: connect: connection refused
	I1003 20:45:30.826145    4416 retry.go:31] will retry after 374.262735ms: dial tcp [::1]:50467: connect: connection refused
	W1003 20:45:31.257593    4416 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1003 20:45:31.257794    4416 ssh_runner.go:195] Run: systemctl --version
	I1003 20:45:31.262797    4416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 20:45:31.267025    4416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 20:45:31.267102    4416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1003 20:45:31.273367    4416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1003 20:45:31.282401    4416 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
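
The two find/sed commands above rewrite any bridge or podman CNI configs so their pod subnet matches minikube's 10.244.0.0/16 (and strip IPv6 entries); the log reports that 87-podman-bridge.conflist was adjusted. A pure-Go analogue of the podman rewrite, using the same regular expressions and target file; minikube itself performs this over SSH with sed:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/cni/net.d/87-podman-bridge.conflist"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Same substitutions as the sed one-liner in the log.
    	subnet := regexp.MustCompile(`(?m)^(.*)"subnet": ".*"(.*)$`)
    	gateway := regexp.MustCompile(`(?m)^(.*)"gateway": ".*"(.*)$`)
    	data = subnet.ReplaceAll(data, []byte(`$1"subnet": "10.244.0.0/16"$2`))
    	data = gateway.ReplaceAll(data, []byte(`$1"gateway": "10.244.0.1"$2`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }
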
	I1003 20:45:31.282415    4416 start.go:495] detecting cgroup driver to use...
	I1003 20:45:31.282584    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:45:31.292993    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1003 20:45:31.298023    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1003 20:45:31.302252    4416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1003 20:45:31.302296    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1003 20:45:31.306223    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:45:31.310193    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1003 20:45:31.313906    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1003 20:45:31.317440    4416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 20:45:31.320950    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1003 20:45:31.323966    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1003 20:45:31.327112    4416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1003 20:45:31.330352    4416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 20:45:31.333621    4416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 20:45:31.336603    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:31.418843    4416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1003 20:45:31.425768    4416 start.go:495] detecting cgroup driver to use...
	I1003 20:45:31.425838    4416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1003 20:45:31.433278    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:45:31.439771    4416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 20:45:31.445611    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 20:45:31.450333    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:45:31.454922    4416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1003 20:45:31.486249    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1003 20:45:31.491550    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 20:45:31.497032    4416 ssh_runner.go:195] Run: which cri-dockerd
	I1003 20:45:31.498438    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1003 20:45:31.501560    4416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1003 20:45:31.506951    4416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1003 20:45:31.568854    4416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1003 20:45:31.640875    4416 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1003 20:45:31.640944    4416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1003 20:45:31.646227    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:31.709355    4416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:45:31.823933    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1003 20:45:31.828402    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:45:31.832791    4416 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1003 20:45:31.896559    4416 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1003 20:45:31.960997    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:32.028876    4416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1003 20:45:32.034876    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1003 20:45:32.039564    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:32.122107    4416 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1003 20:45:32.160556    4416 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1003 20:45:32.160646    4416 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1003 20:45:32.162825    4416 start.go:563] Will wait 60s for crictl version
	I1003 20:45:32.162880    4416 ssh_runner.go:195] Run: which crictl
	I1003 20:45:32.164079    4416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1003 20:45:32.178882    4416 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
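
Both waits above follow the same pattern: poll for the cri-dockerd socket, and then for a working crictl, for up to 60 seconds before giving up. A small sketch of that polling loop; the 250 ms interval is an assumption, only the socket path and the 60 s budget come from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls for a filesystem path (here the cri-dockerd socket)
    // until it exists or the timeout expires.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("cri-dockerd socket is ready")
    }
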
	I1003 20:45:32.178954    4416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:45:32.196928    4416 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1003 20:45:32.218522    4416 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1003 20:45:32.218666    4416 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1003 20:45:32.219930    4416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
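
The bash one-liner above makes the /etc/hosts edit idempotent: it filters out any existing host.minikube.internal line, appends the 10.0.2.2 mapping, and copies the result back into place. The same logic in plain Go (illustrative helper, not minikube's code):

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any stale line for the given name and appends
    // the desired ip<TAB>name mapping, mirroring the grep/echo pipeline above.
    func ensureHostsEntry(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	// Trim a trailing empty element so blank lines don't accumulate.
    	if n := len(kept); n > 0 && kept[n-1] == "" {
    		kept = kept[:n-1]
    	}
    	kept = append(kept, ip+"\t"+name, "")
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
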
	I1003 20:45:32.223357    4416 kubeadm.go:883] updating cluster {Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1003 20:45:32.223408    4416 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1003 20:45:32.223455    4416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:45:32.233691    4416 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:45:32.233699    4416 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1003 20:45:32.233756    4416 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:45:32.237473    4416 ssh_runner.go:195] Run: which lz4
	I1003 20:45:32.238842    4416 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1003 20:45:32.240125    4416 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1003 20:45:32.240135    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1003 20:45:33.158320    4416 docker.go:649] duration metric: took 919.516208ms to copy over tarball
	I1003 20:45:33.158391    4416 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1003 20:45:34.356798    4416 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.198394208s)
	I1003 20:45:34.356813    4416 ssh_runner.go:146] rm: /preloaded.tar.lz4
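
The preload sequence above is: check whether /preloaded.tar.lz4 already exists on the guest (it does not), copy the roughly 359 MB cached tarball over, unpack it into /var with tar -I lz4 while preserving xattrs, then delete the tarball. A local, simplified analogue of those steps (minikube does the copy over SSH; the cache path is shortened to $HOME/.minikube here for illustration):

    package main

    import (
    	"io"
    	"os"
    	"os/exec"
    )

    func main() {
    	src := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4")
    	dst := "/preloaded.tar.lz4"

    	// Copy the tarball only if the target is missing (the stat check above).
    	if _, err := os.Stat(dst); os.IsNotExist(err) {
    		in, err := os.Open(src)
    		if err != nil {
    			panic(err)
    		}
    		defer in.Close()
    		out, err := os.Create(dst)
    		if err != nil {
    			panic(err)
    		}
    		if _, err := io.Copy(out, in); err != nil {
    			panic(err)
    		}
    		out.Close()
    	}

    	// Same tar invocation as the log, then clean up the tarball.
    	tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", dst)
    	tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
    	if err := tar.Run(); err != nil {
    		panic(err)
    	}
    	if err := os.Remove(dst); err != nil {
    		panic(err)
    	}
    }
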
	I1003 20:45:34.372116    4416 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1003 20:45:34.374970    4416 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1003 20:45:34.380001    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:34.461229    4416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1003 20:45:36.024123    4416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.562877125s)
	I1003 20:45:36.024219    4416 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1003 20:45:36.035277    4416 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1003 20:45:36.035288    4416 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1003 20:45:36.035293    4416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 20:45:36.039839    4416 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:36.041189    4416 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:36.042998    4416 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:36.044706    4416 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:36.046645    4416 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:36.047135    4416 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:36.048381    4416 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:36.049257    4416 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:36.050453    4416 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:36.050528    4416 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:36.051738    4416 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:36.052095    4416 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1003 20:45:36.053041    4416 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:36.053136    4416 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:36.054320    4416 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1003 20:45:36.055056    4416 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.027624    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:38.066043    4416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1003 20:45:38.066100    4416 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:38.066233    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1003 20:45:38.087737    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1003 20:45:38.136884    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:38.153705    4416 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1003 20:45:38.153734    4416 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:38.153817    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1003 20:45:38.167906    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1003 20:45:38.169310    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:38.181043    4416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1003 20:45:38.181066    4416 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:38.181132    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1003 20:45:38.183885    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:38.191797    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1003 20:45:38.201459    4416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1003 20:45:38.201481    4416 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:38.201538    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1003 20:45:38.211198    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W1003 20:45:38.484165    4416 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 20:45:38.484448    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:38.503987    4416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1003 20:45:38.504017    4416 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:38.504100    4416 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:45:38.522007    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 20:45:38.522166    4416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1003 20:45:38.523972    4416 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1003 20:45:38.523984    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1003 20:45:38.554514    4416 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1003 20:45:38.554528    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1003 20:45:38.651235    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:38.676346    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W1003 20:45:38.684758    4416 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1003 20:45:38.684909    4416 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.798608    4416 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1003 20:45:38.798641    4416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1003 20:45:38.798656    4416 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1003 20:45:38.798671    4416 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1003 20:45:38.798671    4416 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:38.798702    4416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1003 20:45:38.798715    4416 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.798735    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1003 20:45:38.798735    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1003 20:45:38.798756    4416 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1003 20:45:38.815383    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1003 20:45:38.815528    4416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1003 20:45:38.815808    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1003 20:45:38.815862    4416 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1003 20:45:38.815882    4416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1003 20:45:38.817025    4416 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1003 20:45:38.817041    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1003 20:45:38.817541    4416 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1003 20:45:38.817555    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1003 20:45:38.830792    4416 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1003 20:45:38.830804    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1003 20:45:38.880680    4416 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1003 20:45:38.883296    4416 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1003 20:45:38.883305    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1003 20:45:38.920498    4416 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1003 20:45:38.920546    4416 cache_images.go:92] duration metric: took 2.885245333s to LoadCachedImages
	W1003 20:45:38.920589    4416 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
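
Each image in LoadCachedImages goes through the cycle visible above: inspect the image ID in the runtime, and if it is missing or does not match the expected hash, remove the stale tag and load the cached tarball with docker load. Here the kube-apiserver tarball is missing from the host cache, which aborts the load and produces the warning. A sketch of the per-image decision (commands mirror the log; the wrapper is illustrative and runs locally rather than over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // loadIfMissing reloads an image from a cached tarball unless the runtime
    // already holds it at the expected ID.
    func loadIfMissing(image, wantID, cachedTar string) error {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err == nil && strings.TrimSpace(string(out)) == wantID {
    		return nil // already present at the expected hash
    	}
    	exec.Command("docker", "rmi", image).Run() // ignore error if the tag doesn't exist
    	load := exec.Command("/bin/bash", "-c", fmt.Sprintf("sudo cat %s | docker load", cachedTar))
    	if out, err := load.CombinedOutput(); err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := loadIfMissing(
    		"registry.k8s.io/pause:3.7",
    		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7",
    	)
    	if err != nil {
    		panic(err)
    	}
    }
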
	I1003 20:45:38.920594    4416 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1003 20:45:38.920649    4416 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-455000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 20:45:38.920723    4416 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1003 20:45:38.934185    4416 cni.go:84] Creating CNI manager for ""
	I1003 20:45:38.934196    4416 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:45:38.934202    4416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 20:45:38.934213    4416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-455000 NodeName:stopped-upgrade-455000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 20:45:38.934276    4416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-455000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 20:45:38.934346    4416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1003 20:45:38.937910    4416 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 20:45:38.937949    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 20:45:38.941039    4416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1003 20:45:38.946176    4416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 20:45:38.951484    4416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1003 20:45:38.956795    4416 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1003 20:45:38.958069    4416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 20:45:38.962087    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:45:39.045064    4416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:45:39.052384    4416 certs.go:68] Setting up /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000 for IP: 10.0.2.15
	I1003 20:45:39.052395    4416 certs.go:194] generating shared ca certs ...
	I1003 20:45:39.052403    4416 certs.go:226] acquiring lock for ca certs: {Name:mke7121fb3a343b392a0b01a3f973157c3dad296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.052588    4416 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key
	I1003 20:45:39.052653    4416 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key
	I1003 20:45:39.052658    4416 certs.go:256] generating profile certs ...
	I1003 20:45:39.052764    4416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.key
	I1003 20:45:39.052783    4416 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc
	I1003 20:45:39.052796    4416 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1003 20:45:39.201855    4416 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc ...
	I1003 20:45:39.201868    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc: {Name:mk510a964a5e41d0d17a2fd442229e0d87401b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.202421    4416 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc ...
	I1003 20:45:39.202428    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc: {Name:mkb4398dc0c7ea2a578faad784730f0ad0f2647c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.202609    4416 certs.go:381] copying /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt.849a58cc -> /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt
	I1003 20:45:39.202756    4416 certs.go:385] copying /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key.849a58cc -> /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key
	I1003 20:45:39.202943    4416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/proxy-client.key
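
The profile cert step regenerates apiserver.crt because it must carry every IP SAN listed above (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15); the hash-suffixed files are then copied over the canonical names. A check like the following is one way to decide whether an existing certificate already covers the required SANs (illustrative only, not minikube's certs.go logic):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"net"
    	"os"
    )

    // certCoversIPs reports whether the PEM certificate at path lists every
    // required IP SAN; if not, a new profile cert has to be generated.
    func certCoversIPs(path string, ips []string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	for _, want := range ips {
    		found := false
    		for _, got := range cert.IPAddresses {
    			if got.Equal(net.ParseIP(want)) {
    				found = true
    				break
    			}
    		}
    		if !found {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := certCoversIPs("apiserver.crt", []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "10.0.2.15"})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("existing cert covers required SANs:", ok)
    }
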
	I1003 20:45:39.203100    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem (1338 bytes)
	W1003 20:45:39.203134    4416 certs.go:480] ignoring /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556_empty.pem, impossibly tiny 0 bytes
	I1003 20:45:39.203140    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 20:45:39.203162    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem (1078 bytes)
	I1003 20:45:39.203184    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem (1123 bytes)
	I1003 20:45:39.203200    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/key.pem (1675 bytes)
	I1003 20:45:39.203241    4416 certs.go:484] found cert: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem (1708 bytes)
	I1003 20:45:39.203561    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 20:45:39.210413    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 20:45:39.217749    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 20:45:39.224961    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1003 20:45:39.231743    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 20:45:39.238413    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 20:45:39.245667    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 20:45:39.252979    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 20:45:39.259705    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/1556.pem --> /usr/share/ca-certificates/1556.pem (1338 bytes)
	I1003 20:45:39.266471    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/ssl/certs/15562.pem --> /usr/share/ca-certificates/15562.pem (1708 bytes)
	I1003 20:45:39.273736    4416 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 20:45:39.280737    4416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 20:45:39.285937    4416 ssh_runner.go:195] Run: openssl version
	I1003 20:45:39.287944    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1556.pem && ln -fs /usr/share/ca-certificates/1556.pem /etc/ssl/certs/1556.pem"
	I1003 20:45:39.290823    4416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1556.pem
	I1003 20:45:39.292179    4416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  4 03:05 /usr/share/ca-certificates/1556.pem
	I1003 20:45:39.292205    4416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1556.pem
	I1003 20:45:39.293974    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1556.pem /etc/ssl/certs/51391683.0"
	I1003 20:45:39.297469    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15562.pem && ln -fs /usr/share/ca-certificates/15562.pem /etc/ssl/certs/15562.pem"
	I1003 20:45:39.300788    4416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15562.pem
	I1003 20:45:39.302330    4416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  4 03:05 /usr/share/ca-certificates/15562.pem
	I1003 20:45:39.302351    4416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15562.pem
	I1003 20:45:39.304284    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15562.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 20:45:39.307326    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 20:45:39.310601    4416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:45:39.312052    4416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  4 02:48 /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:45:39.312075    4416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 20:45:39.313704    4416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
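
The openssl x509 -hash / ln -fs pairs above install each certificate into the system trust store under its subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based tools look up trusted CAs. A compact Go version of one such step, shelling out to the same openssl invocation seen in the log (simplified: it links straight to the given path and needs appropriate privileges):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the certificate's subject hash via openssl
    // and points <hash>.0 in /etc/ssl/certs at it.
    func linkBySubjectHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // -f semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    	fmt.Println("trust link created")
    }
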
	I1003 20:45:39.316934    4416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 20:45:39.318211    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 20:45:39.320104    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 20:45:39.321938    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 20:45:39.323944    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 20:45:39.325714    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 20:45:39.327526    4416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
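
The -checkend 86400 invocations above verify that none of the control-plane certificates expires within the next 24 hours. The same check can be done without openssl by parsing the certificate and comparing NotAfter, as in this sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin is a pure-Go analogue of `openssl x509 -checkend`: it
    // reports whether the certificate will expire within the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
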
	I1003 20:45:39.329223    4416 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-455000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50502 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-455000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1003 20:45:39.329297    4416 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:45:39.339944    4416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 20:45:39.342951    4416 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1003 20:45:39.342957    4416 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1003 20:45:39.342990    4416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 20:45:39.346886    4416 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 20:45:39.347185    4416 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-455000" does not appear in /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:45:39.347280    4416 kubeconfig.go:62] /Users/jenkins/minikube-integration/19546-1040/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-455000" cluster setting kubeconfig missing "stopped-upgrade-455000" context setting]
	I1003 20:45:39.347505    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:45:39.347957    4416 kapi.go:59] client config for stopped-upgrade-455000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c765d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
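
The kapi client config printed above is essentially a rest.Config pointing at https://10.0.2.15:8443 with the profile's client certificate, key, and the minikube CA. For reference, this is roughly how such a client is built with client-go (requires the k8s.io/client-go and k8s.io/apimachinery modules; the node listing is only an example use):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Host and TLS file paths match the client config in the log.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }
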
	I1003 20:45:39.348314    4416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 20:45:39.351099    4416 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-455000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1003 20:45:39.351104    4416 kubeadm.go:1160] stopping kube-system containers ...
	I1003 20:45:39.351149    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1003 20:45:39.362141    4416 docker.go:483] Stopping containers: [38d603088dfa 61ff45fab245 ce9918a775c3 71c3a5cbd990 ca8f96da5995 f022ceefb216 86798697ade1 77f0409843de]
	I1003 20:45:39.362206    4416 ssh_runner.go:195] Run: docker stop 38d603088dfa 61ff45fab245 ce9918a775c3 71c3a5cbd990 ca8f96da5995 f022ceefb216 86798697ade1 77f0409843de
	I1003 20:45:39.372905    4416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1003 20:45:39.378324    4416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:45:39.381818    4416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:45:39.381827    4416 kubeadm.go:157] found existing configuration files:
	
	I1003 20:45:39.381870    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf
	I1003 20:45:39.385217    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:45:39.385260    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:45:39.388129    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf
	I1003 20:45:39.390813    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:45:39.390853    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:45:39.393826    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf
	I1003 20:45:39.396757    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:45:39.396788    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:45:39.399260    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf
	I1003 20:45:39.401909    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:45:39.401942    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:45:39.404849    4416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:45:39.407480    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:39.428879    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:39.994146    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:40.125970    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:40.147466    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 20:45:40.171032    4416 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:45:40.171121    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:45:40.673243    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:45:41.171650    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:45:41.176171    4416 api_server.go:72] duration metric: took 1.00513775s to wait for apiserver process to appear ...
	I1003 20:45:41.176184    4416 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:45:41.176199    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:46.178309    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:46.178364    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:51.178832    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:51.178855    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:45:56.179244    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:45:56.179321    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:01.180083    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:01.180108    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:06.180910    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:06.181003    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:11.182402    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:11.182430    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:16.183781    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:16.183833    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:21.185302    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:21.185333    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:26.185979    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:26.186022    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:31.188316    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:31.188337    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:36.190506    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:36.190531    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:41.192803    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:41.193064    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:41.212035    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:46:41.212138    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:41.226068    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:46:41.226166    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:41.237874    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:46:41.237963    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:41.248369    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:46:41.248442    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:41.258608    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:46:41.258677    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:41.269029    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:46:41.269108    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:41.280111    4416 logs.go:282] 0 containers: []
	W1003 20:46:41.280121    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:41.280187    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:41.290614    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:46:41.290634    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:41.290640    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:41.370920    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:46:41.370935    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:46:41.386154    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:46:41.386164    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:46:41.399898    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:46:41.399907    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:46:41.412323    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:46:41.412333    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:46:41.457762    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:46:41.457772    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:46:41.471109    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:46:41.471122    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:46:41.486283    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:41.486297    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:41.513498    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:41.513506    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:41.551917    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:41.551924    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:41.556050    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:46:41.556056    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:46:41.572491    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:46:41.572505    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:46:41.585033    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:46:41.585043    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:41.600990    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:46:41.601001    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:46:41.612551    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:46:41.612562    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:46:41.624380    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:46:41.624395    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:46:44.144247    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:49.146481    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:49.146660    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:49.160057    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:46:49.160142    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:49.172175    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:46:49.172261    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:49.183592    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:46:49.183671    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:49.194179    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:46:49.194262    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:49.205707    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:46:49.205788    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:49.216451    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:46:49.216525    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:49.228484    4416 logs.go:282] 0 containers: []
	W1003 20:46:49.228496    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:49.228567    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:49.238920    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:46:49.238938    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:49.238944    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:49.277210    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:49.277226    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:49.302665    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:46:49.302682    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:46:49.317469    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:46:49.317496    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:46:49.339707    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:46:49.339724    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:46:49.353286    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:46:49.353304    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:46:49.365576    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:46:49.365588    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:46:49.384228    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:49.384237    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:49.423814    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:46:49.423827    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:46:49.469462    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:46:49.469490    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:46:49.485922    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:46:49.485935    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:46:49.498372    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:46:49.498383    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:46:49.510695    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:46:49.510707    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:49.523599    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:49.523611    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:49.527953    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:46:49.527963    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:46:49.543531    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:46:49.543544    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:46:52.057455    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:46:57.059755    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:46:57.059891    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:46:57.076636    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:46:57.076722    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:46:57.087366    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:46:57.087439    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:46:57.098107    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:46:57.098188    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:46:57.108391    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:46:57.108466    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:46:57.118629    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:46:57.118707    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:46:57.129548    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:46:57.129616    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:46:57.139559    4416 logs.go:282] 0 containers: []
	W1003 20:46:57.139573    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:46:57.139638    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:46:57.150241    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:46:57.150257    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:46:57.150263    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:46:57.186821    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:46:57.186830    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:46:57.224934    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:46:57.224948    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:46:57.238802    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:46:57.238812    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:46:57.255708    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:46:57.255719    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:46:57.266999    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:46:57.267009    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:46:57.281052    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:46:57.281062    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:46:57.299464    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:46:57.299475    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:46:57.312752    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:46:57.312763    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:46:57.348989    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:46:57.348999    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:46:57.363279    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:46:57.363288    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:46:57.375541    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:46:57.375550    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:46:57.400760    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:46:57.400768    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:46:57.412138    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:46:57.412150    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:46:57.416369    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:46:57.416378    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:46:57.431117    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:46:57.431127    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:46:59.945375    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:04.948172    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:04.948367    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:04.963439    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:04.963537    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:04.979449    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:04.979529    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:04.991310    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:04.991382    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:05.001902    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:05.001980    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:05.012267    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:05.012334    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:05.024309    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:05.024385    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:05.034609    4416 logs.go:282] 0 containers: []
	W1003 20:47:05.034621    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:05.034698    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:05.049651    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:05.049671    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:05.049677    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:05.086595    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:05.086603    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:05.098496    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:05.098506    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:05.110477    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:05.110489    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:05.134917    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:05.134925    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:05.149191    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:05.149200    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:05.163421    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:05.163431    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:05.175332    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:05.175341    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:05.179756    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:05.179766    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:05.219969    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:05.219982    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:05.231388    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:05.231403    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:05.246131    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:05.246141    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:05.262658    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:05.262668    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:05.274883    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:05.274896    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:05.309642    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:05.309657    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:05.324429    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:05.324440    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:07.847841    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:12.850474    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:12.850713    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:12.873729    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:12.873846    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:12.893192    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:12.893284    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:12.905793    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:12.905864    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:12.916812    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:12.916891    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:12.927114    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:12.927193    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:12.939780    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:12.939860    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:12.951037    4416 logs.go:282] 0 containers: []
	W1003 20:47:12.951050    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:12.951118    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:12.961637    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:12.961656    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:12.961661    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:12.987363    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:12.987373    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:12.998658    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:12.998671    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:13.014651    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:13.014662    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:13.036565    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:13.036574    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:13.048179    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:13.048189    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:13.086681    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:13.086689    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:13.105926    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:13.105934    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:13.119851    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:13.119862    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:13.137516    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:13.137526    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:13.149415    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:13.149425    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:13.163860    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:13.163870    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:13.177695    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:13.177704    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:13.214479    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:13.214493    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:13.228621    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:13.228631    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:13.232748    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:13.232756    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:15.770275    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:20.772547    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:20.772813    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:20.799838    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:20.799970    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:20.819231    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:20.819324    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:20.832239    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:20.832322    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:20.843249    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:20.843322    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:20.853164    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:20.853238    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:20.864108    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:20.864183    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:20.874466    4416 logs.go:282] 0 containers: []
	W1003 20:47:20.874479    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:20.874543    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:20.892672    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:20.892691    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:20.892696    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:20.906969    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:20.906979    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:20.918582    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:20.918593    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:20.938051    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:20.938062    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:20.955500    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:20.955510    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:20.980759    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:20.980774    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:21.020531    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:21.020543    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:21.037435    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:21.037445    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:21.049228    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:21.049239    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:21.062934    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:21.062945    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:21.075216    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:21.075225    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:21.087425    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:21.087439    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:21.101061    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:21.101074    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:21.105658    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:21.105667    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:21.144927    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:21.144941    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:21.182903    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:21.182917    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:23.700314    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:28.702615    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:28.702899    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:28.730728    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:28.730874    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:28.749326    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:28.749401    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:28.762876    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:28.762959    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:28.774584    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:28.774648    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:28.785125    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:28.785198    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:28.795597    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:28.795674    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:28.805996    4416 logs.go:282] 0 containers: []
	W1003 20:47:28.806007    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:28.806069    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:28.816288    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:28.816304    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:28.816310    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:28.880723    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:28.880732    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:28.905395    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:28.905407    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:28.944205    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:28.944218    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:28.958039    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:28.958054    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:28.969800    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:28.969810    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:28.982741    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:28.982752    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:28.987566    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:28.987575    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:29.007738    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:29.007752    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:29.032517    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:29.032534    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:29.044180    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:29.044193    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:29.059174    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:29.059188    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:29.071278    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:29.071289    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:29.089026    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:29.089041    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:29.100245    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:29.100255    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:29.136854    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:29.136861    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:31.650713    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:36.651401    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:36.651500    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:36.662773    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:36.662846    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:36.674978    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:36.675065    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:36.686450    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:36.686567    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:36.697462    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:36.697536    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:36.708803    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:36.708882    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:36.720568    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:36.720640    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:36.731769    4416 logs.go:282] 0 containers: []
	W1003 20:47:36.731779    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:36.731847    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:36.742785    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:36.742803    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:36.742809    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:36.758410    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:36.758420    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:36.783229    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:36.783245    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:36.826862    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:36.826880    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:36.842300    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:36.842314    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:36.855647    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:36.855661    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:36.867978    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:36.867989    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:36.903133    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:36.903146    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:36.917751    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:36.917765    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:36.936248    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:36.936257    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:36.953268    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:36.953280    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:36.957585    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:36.957595    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:36.997581    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:36.997592    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:37.012575    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:37.012585    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:37.026748    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:37.026758    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:37.046792    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:37.046803    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:39.559924    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:44.560908    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:44.561011    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:44.572645    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:44.572721    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:44.584157    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:44.584236    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:44.595112    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:44.595190    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:44.607015    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:44.607098    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:44.617967    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:44.618045    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:44.629029    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:44.629118    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:44.639913    4416 logs.go:282] 0 containers: []
	W1003 20:47:44.639923    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:44.639989    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:44.654068    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:44.654086    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:44.654093    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:44.692565    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:44.692580    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:44.704560    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:44.704572    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:44.716989    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:44.717000    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:44.729672    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:44.729682    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:44.744299    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:44.744310    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:44.760048    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:44.760058    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:44.771928    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:44.771939    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:44.796697    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:44.796705    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:44.809093    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:44.809104    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:44.847975    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:44.848012    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:44.852028    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:44.852036    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:44.902659    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:44.902669    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:44.918787    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:44.918801    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:44.933948    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:44.933958    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:44.952049    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:44.952057    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:47.465922    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:47:52.468306    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:47:52.468398    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:47:52.479239    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:47:52.479313    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:47:52.490953    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:47:52.491038    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:47:52.502909    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:47:52.502983    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:47:52.514550    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:47:52.514632    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:47:52.526842    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:47:52.526918    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:47:52.538623    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:47:52.538704    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:47:52.549693    4416 logs.go:282] 0 containers: []
	W1003 20:47:52.549702    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:47:52.549772    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:47:52.559959    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:47:52.559975    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:47:52.559980    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:47:52.574543    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:47:52.574554    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:47:52.611365    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:47:52.611381    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:47:52.625858    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:47:52.625873    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:47:52.637491    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:47:52.637500    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:47:52.660067    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:47:52.660075    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:47:52.664030    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:47:52.664036    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:47:52.678079    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:47:52.678094    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:47:52.691717    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:47:52.691731    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:47:52.703168    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:47:52.703177    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:47:52.714740    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:47:52.714750    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:47:52.732221    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:47:52.732236    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:47:52.743682    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:47:52.743692    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:47:52.756732    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:47:52.756747    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:47:52.792726    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:47:52.792740    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:47:52.808689    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:47:52.808703    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:47:55.349093    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:00.351322    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:00.351413    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:00.363232    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:00.363321    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:00.377308    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:00.377390    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:00.388168    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:00.388264    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:00.400549    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:00.400641    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:00.411189    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:00.411262    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:00.421588    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:00.421667    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:00.432174    4416 logs.go:282] 0 containers: []
	W1003 20:48:00.432186    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:00.432253    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:00.445818    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:00.445835    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:00.445840    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:00.464095    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:00.464107    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:00.478055    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:00.478067    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:00.492116    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:00.492126    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:00.506941    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:00.506956    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:00.518554    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:00.518565    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:00.543881    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:00.543892    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:00.581173    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:00.581189    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:00.630443    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:00.630455    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:00.642518    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:00.642529    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:00.657800    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:00.657815    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:00.675144    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:00.675153    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:00.689550    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:00.689560    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:00.725924    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:00.725935    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:00.738543    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:00.738552    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:00.743272    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:00.743279    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:03.256002    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:08.258261    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:08.258378    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:08.269603    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:08.269686    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:08.280583    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:08.280662    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:08.290978    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:08.291058    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:08.302128    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:08.302207    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:08.312504    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:08.312572    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:08.323360    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:08.323432    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:08.333365    4416 logs.go:282] 0 containers: []
	W1003 20:48:08.333375    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:08.333433    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:08.343673    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:08.343691    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:08.343696    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:08.382336    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:08.382344    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:08.393689    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:08.393704    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:08.408023    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:08.408036    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:08.419329    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:08.419340    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:08.433588    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:08.433602    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:08.445647    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:08.445660    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:08.460140    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:08.460153    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:08.484806    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:08.484813    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:08.496532    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:08.496545    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:08.515798    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:08.515813    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:08.529455    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:08.529468    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:08.541900    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:08.541913    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:08.545971    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:08.545977    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:08.581591    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:08.581604    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:08.619292    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:08.619306    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:11.135870    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:16.138218    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:16.138380    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:16.150293    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:16.150377    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:16.169977    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:16.170060    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:16.181952    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:16.182031    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:16.192673    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:16.192757    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:16.202981    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:16.203057    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:16.218808    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:16.218881    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:16.229035    4416 logs.go:282] 0 containers: []
	W1003 20:48:16.229046    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:16.229117    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:16.239536    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:16.239553    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:16.239559    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:16.279232    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:16.279253    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:16.283774    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:16.283782    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:16.319800    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:16.319808    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:16.331611    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:16.331624    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:16.342787    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:16.342800    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:16.358083    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:16.358095    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:16.395506    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:16.395525    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:16.409584    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:16.409597    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:16.432608    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:16.432621    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:16.448775    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:16.448789    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:16.471954    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:16.471962    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:16.483893    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:16.483908    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:16.495437    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:16.495452    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:16.514620    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:16.514633    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:16.525989    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:16.526002    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:19.045179    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:24.047487    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:24.047647    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:24.058571    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:24.058653    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:24.069700    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:24.069770    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:24.079857    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:24.079923    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:24.090971    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:24.091058    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:24.101653    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:24.101723    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:24.111866    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:24.111945    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:24.125324    4416 logs.go:282] 0 containers: []
	W1003 20:48:24.125335    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:24.125400    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:24.135866    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:24.135887    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:24.135893    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:24.148056    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:24.148066    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:24.170440    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:24.170447    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:24.181727    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:24.181742    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:24.185939    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:24.185945    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:24.199905    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:24.199916    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:24.211012    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:24.211023    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:24.235772    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:24.235787    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:24.247455    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:24.247465    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:24.259148    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:24.259162    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:24.297262    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:24.297270    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:24.311066    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:24.311075    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:24.330644    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:24.330655    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:24.354664    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:24.354674    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:24.366833    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:24.366845    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:24.402517    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:24.402532    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:26.942745    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:31.944559    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:31.944672    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:31.960501    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:31.960591    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:31.971839    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:31.971918    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:31.982879    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:31.982948    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:31.997472    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:31.997541    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:32.008574    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:32.008657    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:32.019847    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:32.019919    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:32.030851    4416 logs.go:282] 0 containers: []
	W1003 20:48:32.030861    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:32.030923    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:32.041324    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:32.041342    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:32.041348    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:32.052613    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:32.052623    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:32.076007    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:32.076021    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:32.080017    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:32.080022    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:32.117399    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:32.117413    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:32.131932    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:32.131944    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:32.146270    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:32.146284    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:32.181137    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:32.181145    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:32.193632    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:32.193643    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:32.205065    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:32.205079    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:32.219577    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:32.219590    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:32.237167    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:32.237181    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:32.248748    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:32.248762    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:32.285113    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:32.285121    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:32.298835    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:32.298844    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:32.312074    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:32.312089    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:34.828741    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:39.831122    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:39.831392    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:39.852400    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:39.852520    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:39.867292    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:39.867377    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:39.885107    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:39.885185    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:39.895428    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:39.895511    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:39.905715    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:39.905788    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:39.917732    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:39.917810    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:39.928182    4416 logs.go:282] 0 containers: []
	W1003 20:48:39.928193    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:39.928255    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:39.944997    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:39.945013    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:39.945019    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:39.949227    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:39.949237    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:39.963776    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:39.963787    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:39.978254    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:39.978265    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:39.995814    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:39.995824    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:40.007447    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:40.007457    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:40.030561    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:40.030570    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:40.068517    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:40.068527    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:40.103296    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:40.103307    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:40.141405    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:40.141423    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:40.152635    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:40.152646    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:40.167784    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:40.167795    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:40.179139    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:40.179149    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:40.195322    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:40.195332    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:40.210955    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:40.210971    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:40.228054    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:40.228064    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:42.742473    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:47.744847    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:47.745019    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:47.763134    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:47.763232    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:47.777551    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:47.777634    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:47.788953    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:47.789035    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:47.799739    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:47.799817    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:47.811615    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:47.811694    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:47.823771    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:47.823845    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:47.834082    4416 logs.go:282] 0 containers: []
	W1003 20:48:47.834092    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:47.834162    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:47.844628    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:47.844645    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:47.844650    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:47.858772    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:47.858788    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:47.872641    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:47.872657    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:47.884779    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:47.884788    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:47.906587    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:47.906594    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:47.943900    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:47.943909    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:47.984651    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:47.984666    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:47.997038    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:47.997049    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:48.011527    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:48.011540    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:48.029274    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:48.029286    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:48.041252    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:48.041267    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:48.045296    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:48.045302    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:48.059663    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:48.059678    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:48.072854    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:48.072865    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:48.107669    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:48.107684    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:48.120063    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:48.120074    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:50.634257    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:48:55.636537    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:48:55.636723    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:48:55.647704    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:48:55.647792    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:48:55.665515    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:48:55.665617    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:48:55.676113    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:48:55.676201    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:48:55.689254    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:48:55.689341    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:48:55.699570    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:48:55.699636    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:48:55.709920    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:48:55.709998    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:48:55.720065    4416 logs.go:282] 0 containers: []
	W1003 20:48:55.720075    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:48:55.720140    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:48:55.730952    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:48:55.730970    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:48:55.730976    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:48:55.767493    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:48:55.767503    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:48:55.782969    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:48:55.782978    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:48:55.796952    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:48:55.796962    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:48:55.808218    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:48:55.808233    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:48:55.812529    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:48:55.812536    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:48:55.850195    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:48:55.850205    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:48:55.867846    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:48:55.867860    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:48:55.879621    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:48:55.879633    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:48:55.921924    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:48:55.921936    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:48:55.936971    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:48:55.936985    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:48:55.948475    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:48:55.948485    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:48:55.970895    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:48:55.970902    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:48:55.990496    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:48:55.990506    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:48:56.003061    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:48:56.003071    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:48:56.015215    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:48:56.015227    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:48:58.529242    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:03.531451    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:03.531579    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:03.548636    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:03.548720    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:03.563530    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:03.563616    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:03.579276    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:03.579353    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:03.590253    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:03.590339    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:03.605105    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:03.605187    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:03.616685    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:03.616761    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:03.627078    4416 logs.go:282] 0 containers: []
	W1003 20:49:03.627090    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:03.627153    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:03.637870    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:03.637888    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:03.637893    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:03.651679    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:03.651689    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:03.673819    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:03.673829    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:03.686047    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:03.686058    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:03.704014    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:03.704025    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:03.708486    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:03.708495    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:03.723021    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:03.723032    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:03.737683    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:03.737692    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:03.755021    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:03.755030    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:03.766097    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:03.766107    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:03.777589    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:03.777599    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:03.817199    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:03.817210    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:03.854713    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:03.854724    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:03.893081    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:03.893096    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:03.904356    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:03.904369    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:03.918276    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:03.918286    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:06.442850    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:11.444577    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:11.444816    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:11.467272    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:11.467384    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:11.480923    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:11.481010    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:11.492838    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:11.492913    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:11.503855    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:11.503935    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:11.514422    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:11.514500    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:11.525537    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:11.525620    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:11.542635    4416 logs.go:282] 0 containers: []
	W1003 20:49:11.542647    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:11.542715    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:11.553879    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:11.553899    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:11.553906    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:11.557976    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:11.557982    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:11.573652    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:11.573663    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:11.588180    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:11.588191    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:11.603188    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:11.603196    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:11.617313    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:11.617323    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:11.629787    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:11.629796    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:11.648025    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:11.648034    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:11.662696    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:11.662706    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:11.674685    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:11.674694    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:11.686711    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:11.686720    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:11.725846    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:11.725863    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:11.762017    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:11.762027    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:11.800561    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:11.800573    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:11.812058    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:11.812069    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:11.829796    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:11.829807    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:14.355168    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:19.356692    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:19.356921    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:19.374657    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:19.374762    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:19.388243    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:19.388322    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:19.402916    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:19.402995    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:19.418805    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:19.418878    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:19.429617    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:19.429700    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:19.441519    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:19.441599    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:19.456183    4416 logs.go:282] 0 containers: []
	W1003 20:49:19.456198    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:19.456263    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:19.466614    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:19.466630    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:19.466636    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:19.506879    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:19.506889    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:19.518450    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:19.518465    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:19.530478    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:19.530488    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:19.552726    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:19.552736    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:19.577075    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:19.577086    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:19.591042    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:19.591052    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:19.604229    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:19.604239    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:19.608414    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:19.608421    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:19.645765    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:19.645775    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:19.657382    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:19.657394    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:19.669510    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:19.669523    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:19.681370    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:19.681382    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:19.716577    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:19.716588    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:19.730989    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:19.731000    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:19.745453    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:19.745463    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:22.262029    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:27.264381    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:27.264496    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:27.275788    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:27.275879    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:27.286988    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:27.287062    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:27.297234    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:27.297309    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:27.307456    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:27.307538    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:27.321601    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:27.321691    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:27.332518    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:27.332595    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:27.342204    4416 logs.go:282] 0 containers: []
	W1003 20:49:27.342215    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:27.342280    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:27.352668    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:27.352685    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:27.352692    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:27.387989    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:27.388000    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:27.399934    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:27.399948    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:27.423134    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:27.423145    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:27.437292    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:27.437301    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:27.449103    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:27.449115    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:27.465103    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:27.465113    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:27.477743    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:27.477754    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:27.496675    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:27.496686    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:27.507831    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:27.507842    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:27.526245    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:27.526255    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:27.539092    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:27.539105    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:27.576664    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:27.576672    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:27.580644    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:27.580652    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:27.619550    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:27.619561    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:27.638393    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:27.638409    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:30.172426    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:35.174735    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:35.175196    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:49:35.203901    4416 logs.go:282] 2 containers: [d5e94e411274 ca8f96da5995]
	I1003 20:49:35.204044    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:49:35.221967    4416 logs.go:282] 2 containers: [e2c67b4fa7eb 86798697ade1]
	I1003 20:49:35.222064    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:49:35.235136    4416 logs.go:282] 1 containers: [16379c4ccc7c]
	I1003 20:49:35.235215    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:49:35.246930    4416 logs.go:282] 2 containers: [866af1c6382b 61ff45fab245]
	I1003 20:49:35.246997    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:49:35.261143    4416 logs.go:282] 1 containers: [28b115e47598]
	I1003 20:49:35.261206    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:49:35.271945    4416 logs.go:282] 2 containers: [16ef02dff517 38d603088dfa]
	I1003 20:49:35.272019    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:49:35.291436    4416 logs.go:282] 0 containers: []
	W1003 20:49:35.291447    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:49:35.291507    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:49:35.302206    4416 logs.go:282] 1 containers: [b2f9f64f7de2]
	I1003 20:49:35.302223    4416 logs.go:123] Gathering logs for kube-controller-manager [38d603088dfa] ...
	I1003 20:49:35.302227    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38d603088dfa"
	I1003 20:49:35.314034    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:49:35.314048    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:49:35.335851    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:49:35.335858    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:49:35.339889    4416 logs.go:123] Gathering logs for etcd [e2c67b4fa7eb] ...
	I1003 20:49:35.339896    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2c67b4fa7eb"
	I1003 20:49:35.355898    4416 logs.go:123] Gathering logs for kube-scheduler [866af1c6382b] ...
	I1003 20:49:35.355907    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 866af1c6382b"
	I1003 20:49:35.368064    4416 logs.go:123] Gathering logs for kube-controller-manager [16ef02dff517] ...
	I1003 20:49:35.368078    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16ef02dff517"
	I1003 20:49:35.386048    4416 logs.go:123] Gathering logs for storage-provisioner [b2f9f64f7de2] ...
	I1003 20:49:35.386062    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2f9f64f7de2"
	I1003 20:49:35.397196    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:49:35.397206    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:49:35.408894    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:49:35.408909    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:49:35.443417    4416 logs.go:123] Gathering logs for kube-apiserver [d5e94e411274] ...
	I1003 20:49:35.443427    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5e94e411274"
	I1003 20:49:35.458037    4416 logs.go:123] Gathering logs for etcd [86798697ade1] ...
	I1003 20:49:35.458052    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86798697ade1"
	I1003 20:49:35.473023    4416 logs.go:123] Gathering logs for kube-proxy [28b115e47598] ...
	I1003 20:49:35.473037    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b115e47598"
	I1003 20:49:35.485133    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:49:35.485147    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:49:35.525883    4416 logs.go:123] Gathering logs for kube-apiserver [ca8f96da5995] ...
	I1003 20:49:35.525900    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca8f96da5995"
	I1003 20:49:35.563286    4416 logs.go:123] Gathering logs for coredns [16379c4ccc7c] ...
	I1003 20:49:35.563300    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16379c4ccc7c"
	I1003 20:49:35.589026    4416 logs.go:123] Gathering logs for kube-scheduler [61ff45fab245] ...
	I1003 20:49:35.589038    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61ff45fab245"
	I1003 20:49:38.111429    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:43.113785    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:43.113878    4416 kubeadm.go:597] duration metric: took 4m3.770890792s to restartPrimaryControlPlane
	W1003 20:49:43.113943    4416 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1003 20:49:43.113975    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1003 20:49:44.119520    4416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005532334s)
	I1003 20:49:44.119967    4416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 20:49:44.125446    4416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 20:49:44.128666    4416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 20:49:44.131538    4416 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 20:49:44.131544    4416 kubeadm.go:157] found existing configuration files:
	
	I1003 20:49:44.131576    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf
	I1003 20:49:44.134266    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 20:49:44.134296    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 20:49:44.136794    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf
	I1003 20:49:44.139348    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 20:49:44.139380    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 20:49:44.142534    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf
	I1003 20:49:44.145467    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 20:49:44.145514    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 20:49:44.148079    4416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf
	I1003 20:49:44.151229    4416 kubeadm.go:163] "https://control-plane.minikube.internal:50502" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50502 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 20:49:44.151262    4416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 20:49:44.154534    4416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1003 20:49:44.173258    4416 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1003 20:49:44.173292    4416 kubeadm.go:310] [preflight] Running pre-flight checks
	I1003 20:49:44.218799    4416 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 20:49:44.218945    4416 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 20:49:44.219006    4416 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1003 20:49:44.272499    4416 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 20:49:44.275686    4416 out.go:235]   - Generating certificates and keys ...
	I1003 20:49:44.275720    4416 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1003 20:49:44.275750    4416 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1003 20:49:44.275799    4416 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 20:49:44.275830    4416 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1003 20:49:44.275870    4416 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 20:49:44.275898    4416 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1003 20:49:44.275924    4416 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1003 20:49:44.275973    4416 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1003 20:49:44.276034    4416 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 20:49:44.276098    4416 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 20:49:44.276119    4416 kubeadm.go:310] [certs] Using the existing "sa" key
	I1003 20:49:44.276148    4416 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 20:49:44.345437    4416 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 20:49:44.480196    4416 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 20:49:44.576339    4416 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 20:49:44.810412    4416 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 20:49:44.841459    4416 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 20:49:44.841867    4416 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 20:49:44.841931    4416 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1003 20:49:44.923326    4416 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 20:49:44.927560    4416 out.go:235]   - Booting up control plane ...
	I1003 20:49:44.927601    4416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 20:49:44.927637    4416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 20:49:44.927666    4416 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 20:49:44.927714    4416 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 20:49:44.927836    4416 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1003 20:49:49.432499    4416 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.507279 seconds
	I1003 20:49:49.432570    4416 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 20:49:49.437097    4416 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 20:49:49.946367    4416 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 20:49:49.946488    4416 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-455000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 20:49:50.450621    4416 kubeadm.go:310] [bootstrap-token] Using token: jk3ppo.aut2r0gvifkpc0xd
	I1003 20:49:50.453790    4416 out.go:235]   - Configuring RBAC rules ...
	I1003 20:49:50.453851    4416 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 20:49:50.453901    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 20:49:50.459387    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 20:49:50.460445    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 20:49:50.461335    4416 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 20:49:50.463069    4416 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 20:49:50.466612    4416 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 20:49:50.645269    4416 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1003 20:49:50.854707    4416 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1003 20:49:50.855292    4416 kubeadm.go:310] 
	I1003 20:49:50.855329    4416 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1003 20:49:50.855332    4416 kubeadm.go:310] 
	I1003 20:49:50.855369    4416 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1003 20:49:50.855374    4416 kubeadm.go:310] 
	I1003 20:49:50.855389    4416 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1003 20:49:50.855490    4416 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 20:49:50.855573    4416 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 20:49:50.855586    4416 kubeadm.go:310] 
	I1003 20:49:50.855663    4416 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1003 20:49:50.855671    4416 kubeadm.go:310] 
	I1003 20:49:50.855746    4416 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 20:49:50.855764    4416 kubeadm.go:310] 
	I1003 20:49:50.855846    4416 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1003 20:49:50.855958    4416 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 20:49:50.856096    4416 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 20:49:50.856117    4416 kubeadm.go:310] 
	I1003 20:49:50.856173    4416 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 20:49:50.856213    4416 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1003 20:49:50.856235    4416 kubeadm.go:310] 
	I1003 20:49:50.856272    4416 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jk3ppo.aut2r0gvifkpc0xd \
	I1003 20:49:50.856359    4416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 \
	I1003 20:49:50.856371    4416 kubeadm.go:310] 	--control-plane 
	I1003 20:49:50.856374    4416 kubeadm.go:310] 
	I1003 20:49:50.856418    4416 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1003 20:49:50.856421    4416 kubeadm.go:310] 
	I1003 20:49:50.856470    4416 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jk3ppo.aut2r0gvifkpc0xd \
	I1003 20:49:50.856526    4416 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e258f457da7d6d4c594fcb056b26e81a77e78e21226b0ed29090930db50fe5c6 
	I1003 20:49:50.856618    4416 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
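	For reference, the control-plane rebuild that minikube drove after the restart attempt timed out condenses to the sequence below; every path and flag is taken from the Run: lines above, so this is only a restatement of what the log already shows, not an additional step:
	    # wipe the stale control plane over the cri-dockerd socket
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	    # drop kubeconfigs that no longer point at the expected apiserver endpoint
	    sudo rm -f /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
	    # re-initialise from the config minikube staged, ignoring preflight errors for pre-existing dirs/manifests
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem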
	I1003 20:49:50.856626    4416 cni.go:84] Creating CNI manager for ""
	I1003 20:49:50.856634    4416 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:49:50.860427    4416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1003 20:49:50.868475    4416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1003 20:49:50.872021    4416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
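	The 496-byte conflist copied here is not shown in the log. Purely as an illustration of the format, a minimal bridge CNI configuration of the kind minikube generates for the docker runtime might look like the following; the field values are assumptions, not the actual file contents:
	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF' >/dev/null
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF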
	I1003 20:49:50.877278    4416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 20:49:50.877346    4416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 20:49:50.877374    4416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-455000 minikube.k8s.io/updated_at=2024_10_03T20_49_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=bb93d8722461655cd69aaff21bc3938f9e86d89e minikube.k8s.io/name=stopped-upgrade-455000 minikube.k8s.io/primary=true
	I1003 20:49:50.880562    4416 ops.go:34] apiserver oom_adj: -16
	I1003 20:49:50.921565    4416 kubeadm.go:1113] duration metric: took 44.279416ms to wait for elevateKubeSystemPrivileges
	I1003 20:49:50.921616    4416 kubeadm.go:394] duration metric: took 4m11.592371125s to StartCluster
	I1003 20:49:50.921628    4416 settings.go:142] acquiring lock: {Name:mkcb41cafeed9afeb88d9d6f184696173f92f60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:49:50.921711    4416 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:49:50.922153    4416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/kubeconfig: {Name:mk3ee3e45466495ab1092989494e731c3b1eb95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:49:50.922341    4416 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:49:50.922362    4416 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 20:49:50.922397    4416 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-455000"
	I1003 20:49:50.922405    4416 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-455000"
	W1003 20:49:50.922409    4416 addons.go:243] addon storage-provisioner should already be in state true
	I1003 20:49:50.922408    4416 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-455000"
	I1003 20:49:50.922419    4416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-455000"
	I1003 20:49:50.922421    4416 host.go:66] Checking if "stopped-upgrade-455000" exists ...
	I1003 20:49:50.922480    4416 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:49:50.923432    4416 kapi.go:59] client config for stopped-upgrade-455000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/stopped-upgrade-455000/client.key", CAFile:"/Users/jenkins/minikube-integration/19546-1040/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105c765d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 20:49:50.923552    4416 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-455000"
	W1003 20:49:50.923556    4416 addons.go:243] addon default-storageclass should already be in state true
	I1003 20:49:50.923563    4416 host.go:66] Checking if "stopped-upgrade-455000" exists ...
	I1003 20:49:50.925471    4416 out.go:177] * Verifying Kubernetes components...
	I1003 20:49:50.925854    4416 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 20:49:50.929493    4416 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 20:49:50.929499    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:49:50.933371    4416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 20:49:50.937292    4416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 20:49:50.941422    4416 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:49:50.941428    4416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 20:49:50.941435    4416 sshutil.go:53] new ssh client: &{IP:localhost Port:50467 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/stopped-upgrade-455000/id_rsa Username:docker}
	I1003 20:49:51.024781    4416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 20:49:51.030467    4416 api_server.go:52] waiting for apiserver process to appear ...
	I1003 20:49:51.030526    4416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 20:49:51.035323    4416 api_server.go:72] duration metric: took 112.968375ms to wait for apiserver process to appear ...
	I1003 20:49:51.035331    4416 api_server.go:88] waiting for apiserver healthz status ...
	I1003 20:49:51.035338    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:49:51.040237    4416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 20:49:51.057180    4416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 20:49:51.400350    4416 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 20:49:51.400363    4416 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 20:49:56.036117    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:49:56.036144    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:01.037393    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:01.037421    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:06.037601    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:06.037619    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:11.037877    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:11.037902    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:16.038359    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:16.038401    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:21.039001    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:21.039027    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1003 20:50:21.402220    4416 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1003 20:50:21.406429    4416 out.go:177] * Enabled addons: storage-provisioner
	I1003 20:50:21.413399    4416 addons.go:510] duration metric: took 30.491031917s for enable addons: enabled=[storage-provisioner]
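	The default-storageclass error above is a downstream symptom of the apiserver never becoming reachable on 10.0.2.15:8443, not an addon problem in itself. A sketch of re-checking it by hand from inside the guest, using the binary, kubeconfig, and manifest paths that appear elsewhere in this log (both commands will fail with the same i/o timeout for as long as 8443 stays unreachable):
	    # list storage classes the same way the addon callback does
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get storageclass
	    # re-apply the storage class manifest minikube already copied to the guest
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml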
	I1003 20:50:26.039656    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:26.039681    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:31.040497    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:31.040536    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:36.041609    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:36.041686    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:41.043135    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:41.043160    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:46.044839    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:46.044865    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:51.046964    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:51.047094    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:51.057878    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:50:51.057959    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:51.068071    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:50:51.068150    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:51.079437    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:50:51.079513    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:51.089841    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:50:51.089923    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:51.100311    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:50:51.100387    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:51.114717    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:50:51.114787    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:51.124550    4416 logs.go:282] 0 containers: []
	W1003 20:50:51.124560    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:51.124619    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:51.134951    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:50:51.134969    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:51.134975    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:51.170842    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:50:51.170855    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:50:51.183016    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:50:51.183027    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:50:51.198357    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:50:51.198367    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:50:51.216141    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:50:51.216151    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:50:51.228238    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:51.228247    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:51.252491    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:50:51.252500    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:51.264031    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:51.264043    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:51.299585    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:51.299593    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:51.306365    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:50:51.306375    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:50:51.321465    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:50:51.321478    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:50:51.336475    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:50:51.336490    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:50:51.347701    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:50:51.347711    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:50:53.861369    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:50:58.863628    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:50:58.863782    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:50:58.878112    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:50:58.878200    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:50:58.890617    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:50:58.890695    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:50:58.901358    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:50:58.901440    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:50:58.912108    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:50:58.912188    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:50:58.922974    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:50:58.923053    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:50:58.933414    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:50:58.933489    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:50:58.943998    4416 logs.go:282] 0 containers: []
	W1003 20:50:58.944009    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:50:58.944076    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:50:58.955235    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:50:58.955250    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:50:58.955256    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:50:58.966781    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:50:58.966790    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:50:58.984042    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:50:58.984051    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:50:58.996197    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:50:58.996208    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:50:59.010479    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:50:59.010488    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:50:59.024565    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:50:59.024575    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:50:59.036017    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:50:59.036027    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:50:59.054412    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:50:59.054428    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:50:59.079148    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:50:59.079156    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:50:59.090696    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:50:59.090706    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:50:59.126391    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:50:59.126399    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:50:59.130701    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:50:59.130708    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:50:59.164446    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:50:59.164456    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:01.684502    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:51:06.686803    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:51:06.686993    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:51:06.699468    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:51:06.699555    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:51:06.710293    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:51:06.710365    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:51:06.722990    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:51:06.723071    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:51:06.733993    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:51:06.734071    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:51:06.744385    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:51:06.744454    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:51:06.754376    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:51:06.754440    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:51:06.764595    4416 logs.go:282] 0 containers: []
	W1003 20:51:06.764608    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:51:06.764672    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:51:06.775229    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:51:06.775245    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:51:06.775253    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:51:06.790574    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:51:06.790584    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:51:06.802292    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:51:06.802302    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:51:06.813773    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:51:06.813783    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:51:06.848972    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:51:06.848981    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:51:06.883038    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:51:06.883049    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:51:06.897556    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:51:06.897568    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:51:06.909437    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:51:06.909449    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:06.921405    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:51:06.921416    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:51:06.925798    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:51:06.925809    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:51:06.939281    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:51:06.939292    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:51:06.957015    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:51:06.957024    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:51:06.981284    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:51:06.981292    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:51:09.494089    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:51:14.496379    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:51:14.496777    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:51:14.529191    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:51:14.529333    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:51:14.548232    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:51:14.548319    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:51:14.564456    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:51:14.564541    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:51:14.578281    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:51:14.578347    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:51:14.588626    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:51:14.588700    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:51:14.599212    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:51:14.599288    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:51:14.609583    4416 logs.go:282] 0 containers: []
	W1003 20:51:14.609594    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:51:14.609661    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:51:14.619749    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:51:14.619763    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:51:14.619769    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:51:14.637193    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:51:14.637204    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:51:14.648608    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:51:14.648621    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:51:14.671930    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:51:14.671939    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:51:14.682867    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:51:14.682879    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:51:14.696832    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:51:14.696843    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:51:14.708323    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:51:14.708336    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:14.721064    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:51:14.721079    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:51:14.735752    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:51:14.735762    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:51:14.752913    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:51:14.752923    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:51:14.787189    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:51:14.787198    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:51:14.791163    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:51:14.791171    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:51:14.823781    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:51:14.823797    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
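Each failed probe is followed by the same diagnostic pass seen above: logs.go lists the control-plane containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and then tails the last 400 lines of every match with docker logs. A rough Go reconstruction of that loop, using os/exec and the component names visible in this log (illustrative only; the real code also falls back to crictl and collects kubelet, dmesg, and journalctl output, as the surrounding entries show):

    // gather_logs_sketch.go — illustrative reconstruction, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner",
        }
        for _, c := range components {
            // List container IDs for this component, as in the docker ps lines above.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println("listing", c, "failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // Tail the last 400 lines of each matching container.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }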
	I1003 20:51:17.340432    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:51:22.342707    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:51:22.342882    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:51:22.358170    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:51:22.358256    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:51:22.369487    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:51:22.369559    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:51:22.381348    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:51:22.381430    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:51:22.391743    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:51:22.391817    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:51:22.405669    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:51:22.405749    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:51:22.417082    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:51:22.417157    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:51:22.427568    4416 logs.go:282] 0 containers: []
	W1003 20:51:22.427579    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:51:22.427641    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:51:22.437632    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:51:22.437648    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:51:22.437653    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:22.449023    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:51:22.449035    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:51:22.470999    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:51:22.471012    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:51:22.488988    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:51:22.489000    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:51:22.512770    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:51:22.512780    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:51:22.547068    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:51:22.547075    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:51:22.582008    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:51:22.582025    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:51:22.596989    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:51:22.596999    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:51:22.608344    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:51:22.608354    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:51:22.620725    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:51:22.620734    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:51:22.625396    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:51:22.625405    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:51:22.639532    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:51:22.639543    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:51:22.651182    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:51:22.651192    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:51:25.164476    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:51:30.166732    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:51:30.166945    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:51:30.189694    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:51:30.189817    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:51:30.204379    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:51:30.204475    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:51:30.221363    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:51:30.221433    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:51:30.231662    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:51:30.231722    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:51:30.241908    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:51:30.241984    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:51:30.252119    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:51:30.252189    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:51:30.264173    4416 logs.go:282] 0 containers: []
	W1003 20:51:30.264185    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:51:30.264249    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:51:30.274772    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:51:30.274788    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:51:30.274794    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:51:30.310140    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:51:30.310151    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:51:30.330023    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:51:30.330033    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:51:30.341905    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:51:30.341920    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:30.353706    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:51:30.353716    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:51:30.368563    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:51:30.368573    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:51:30.380350    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:51:30.380359    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:51:30.397630    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:51:30.397640    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:51:30.409363    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:51:30.409374    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:51:30.420679    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:51:30.420689    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:51:30.424830    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:51:30.424838    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:51:30.460343    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:51:30.460352    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:51:30.481262    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:51:30.481272    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:51:33.006242    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:51:38.009002    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:51:38.009494    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:51:38.050007    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:51:38.050154    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:51:38.072689    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:51:38.072814    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:51:38.087645    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:51:38.087732    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:51:38.100153    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:51:38.100229    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:51:38.110921    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:51:38.111002    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:51:38.128751    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:51:38.128825    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:51:38.139346    4416 logs.go:282] 0 containers: []
	W1003 20:51:38.139357    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:51:38.139426    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:51:38.149742    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:51:38.149758    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:51:38.149763    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:51:38.183609    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:51:38.183618    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:51:38.198001    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:51:38.198013    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:38.210357    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:51:38.210366    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:51:38.225493    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:51:38.225503    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:51:38.237321    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:51:38.237333    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:51:38.260191    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:51:38.260200    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:51:38.272268    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:51:38.272281    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:51:38.276592    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:51:38.276599    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:51:38.311790    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:51:38.311802    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:51:38.326159    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:51:38.326173    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:51:38.338081    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:51:38.338095    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:51:38.349912    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:51:38.349922    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:51:40.869440    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:51:45.872054    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:51:45.872342    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:51:45.898999    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:51:45.899128    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:51:45.915922    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:51:45.916014    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:51:45.928993    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:51:45.929080    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:51:45.940192    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:51:45.940259    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:51:45.950398    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:51:45.950461    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:51:45.961432    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:51:45.961511    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:51:45.971925    4416 logs.go:282] 0 containers: []
	W1003 20:51:45.971936    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:51:45.971995    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:51:45.984231    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:51:45.984242    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:51:45.984247    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:51:45.995788    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:51:45.995799    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:51:46.012868    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:51:46.012878    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:51:46.024231    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:51:46.024241    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:51:46.036554    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:51:46.036563    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:51:46.072478    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:51:46.072484    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:51:46.110543    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:51:46.110559    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:51:46.124845    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:51:46.124854    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:46.140335    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:51:46.140350    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:51:46.155413    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:51:46.155422    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:51:46.177961    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:51:46.177971    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:51:46.182177    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:51:46.182185    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:51:46.195953    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:51:46.195961    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:51:48.709943    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:51:53.712320    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:51:53.712538    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:51:53.736391    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:51:53.736515    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:51:53.752160    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:51:53.752258    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:51:53.765714    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:51:53.765796    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:51:53.777254    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:51:53.777327    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:51:53.787379    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:51:53.787446    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:51:53.799555    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:51:53.799615    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:51:53.809287    4416 logs.go:282] 0 containers: []
	W1003 20:51:53.809303    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:51:53.809367    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:51:53.819647    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:51:53.819661    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:51:53.819667    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:51:53.830693    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:51:53.830701    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:51:53.854456    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:51:53.854462    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:51:53.865730    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:51:53.865740    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:51:53.901465    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:51:53.901472    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:51:53.905934    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:51:53.905942    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:51:53.940473    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:51:53.940486    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:51:53.954758    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:51:53.954769    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:51:53.968294    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:51:53.968304    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:51:53.980333    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:51:53.980344    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:51:53.994862    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:51:53.994872    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:51:54.006805    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:51:54.006814    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:51:54.024794    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:51:54.024805    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:51:56.537474    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:01.538219    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:01.538770    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:01.579923    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:01.580083    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:01.602083    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:01.602204    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:01.616982    4416 logs.go:282] 2 containers: [6add665ec5b3 02baafe22d8e]
	I1003 20:52:01.617072    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:01.629727    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:01.629796    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:01.641021    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:01.641099    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:01.652921    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:01.652997    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:01.663440    4416 logs.go:282] 0 containers: []
	W1003 20:52:01.663453    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:01.663518    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:01.674135    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:01.674152    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:01.674157    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:01.688657    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:01.688669    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:01.705341    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:01.705354    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:01.723949    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:01.723959    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:01.735425    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:01.735433    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:01.750388    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:01.750396    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:01.761535    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:01.761543    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:01.784829    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:01.784838    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:01.820169    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:01.820179    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:01.825016    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:01.825025    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:01.837738    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:01.837748    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:01.855762    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:01.855773    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:01.870810    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:01.870820    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:04.406425    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:09.409191    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:09.409702    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:09.449961    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:09.450116    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:09.471022    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:09.471143    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:09.486653    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:52:09.486732    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:09.499624    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:09.499707    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:09.509945    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:09.510023    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:09.520168    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:09.520239    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:09.530501    4416 logs.go:282] 0 containers: []
	W1003 20:52:09.530521    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:09.530577    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:09.540855    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:09.540874    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:52:09.540880    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:52:09.552672    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:09.552683    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:09.567055    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:09.567064    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:09.584984    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:09.584994    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:09.620809    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:09.620818    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:09.657365    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:09.657376    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:09.671773    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:09.671784    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:09.689821    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:09.689830    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:09.701729    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:09.701740    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:09.713530    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:09.713540    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:09.725298    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:09.725313    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:09.750485    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:09.750492    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:09.762036    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:09.762048    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:09.766376    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:52:09.766383    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:52:09.777775    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:09.777789    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:12.291635    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:17.294402    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:17.294576    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:17.318940    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:17.319033    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:17.333326    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:17.333404    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:17.345839    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:52:17.345917    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:17.356375    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:17.356455    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:17.371800    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:17.371882    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:17.382417    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:17.382492    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:17.392324    4416 logs.go:282] 0 containers: []
	W1003 20:52:17.392338    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:17.392395    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:17.402757    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:17.402775    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:17.402795    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:17.419990    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:52:17.419999    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:52:17.431407    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:17.431418    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:17.445619    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:17.445629    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:17.459634    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:52:17.459644    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:52:17.470652    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:17.470662    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:17.482001    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:17.482011    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:17.516834    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:17.516844    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:17.528641    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:17.528651    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:17.540267    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:17.540277    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:17.544597    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:17.544605    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:17.559716    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:17.559726    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:17.571322    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:17.571333    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:17.596051    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:17.596058    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:17.607377    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:17.607389    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:20.145071    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:25.147723    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:25.148014    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:25.174598    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:25.174725    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:25.191688    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:25.191780    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:25.204566    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:52:25.204637    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:25.216244    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:25.216304    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:25.227417    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:25.227483    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:25.239085    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:25.239154    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:25.249292    4416 logs.go:282] 0 containers: []
	W1003 20:52:25.249303    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:25.249372    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:25.259247    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:25.259263    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:52:25.259268    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:52:25.270581    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:25.270594    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:25.282377    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:25.282393    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:25.293662    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:25.293672    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:25.327576    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:52:25.327589    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:52:25.338836    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:25.338852    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:25.353500    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:25.353510    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:25.371317    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:25.371326    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:25.383432    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:25.383442    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:25.397943    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:25.397952    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:25.409682    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:25.409692    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:25.434528    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:25.434538    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:25.438647    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:25.438656    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:25.472345    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:25.472361    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:25.486163    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:25.486174    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:27.999599    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:33.001923    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:33.002004    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:33.013058    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:33.013125    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:33.024698    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:33.024770    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:33.036323    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:52:33.036408    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:33.050718    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:33.050797    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:33.071925    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:33.071991    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:33.082913    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:33.082976    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:33.094043    4416 logs.go:282] 0 containers: []
	W1003 20:52:33.094054    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:33.094115    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:33.106330    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:33.106345    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:33.106351    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:33.122592    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:33.122601    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:33.134681    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:33.134694    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:33.149227    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:33.149238    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:33.169229    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:33.169241    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:33.196340    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:33.196355    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:33.234378    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:33.234394    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:33.239150    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:52:33.239158    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:52:33.252038    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:52:33.252054    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:52:33.264901    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:33.264914    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:33.279959    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:33.279973    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:33.295766    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:33.295777    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:33.310887    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:33.310898    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:33.323721    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:33.323733    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:33.361845    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:33.361857    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:35.881682    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:40.883198    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:40.883814    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:40.926171    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:40.926329    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:40.948955    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:40.949062    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:40.964295    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:52:40.964385    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:40.976735    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:40.976815    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:40.988084    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:40.988158    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:41.000580    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:41.000663    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:41.011388    4416 logs.go:282] 0 containers: []
	W1003 20:52:41.011399    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:41.011469    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:41.030762    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:41.030781    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:41.030791    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:41.066527    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:41.066537    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:41.078877    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:41.078887    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:41.104664    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:41.104677    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:41.120163    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:41.120172    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:41.134732    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:52:41.134742    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:52:41.147041    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:52:41.147051    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:52:41.159069    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:41.159078    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:41.174534    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:41.174543    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:41.198938    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:41.198945    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:41.232051    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:41.232059    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:41.236012    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:41.236019    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:41.252123    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:41.252131    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:41.264035    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:41.264046    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:41.275909    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:41.275919    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:43.790323    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:48.793324    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:48.793895    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:48.832314    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:48.832460    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:48.854955    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:48.855072    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:48.870758    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:52:48.870849    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:48.883812    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:48.883905    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:48.894336    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:48.894409    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:48.905088    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:48.905162    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:48.916121    4416 logs.go:282] 0 containers: []
	W1003 20:52:48.916132    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:48.916195    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:48.930085    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:48.930104    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:48.930109    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:48.954090    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:48.954100    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:48.988890    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:48.988898    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:48.992864    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:52:48.992872    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:52:49.010201    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:52:49.010213    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:52:49.021861    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:49.021876    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:49.039436    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:49.039447    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:49.066158    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:49.066167    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:49.078416    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:49.078427    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:49.089980    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:49.089989    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:49.101321    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:49.101336    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:49.118940    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:49.118951    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:49.131304    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:49.131314    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:49.146360    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:49.146370    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:49.184834    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:49.184845    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:51.702011    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:52:56.704757    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:52:56.704838    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:52:56.717376    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:52:56.717457    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:52:56.729169    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:52:56.729234    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:52:56.741923    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:52:56.742016    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:52:56.758844    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:52:56.758898    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:52:56.778442    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:52:56.778533    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:52:56.790570    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:52:56.790672    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:52:56.802013    4416 logs.go:282] 0 containers: []
	W1003 20:52:56.802025    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:52:56.802100    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:52:56.813548    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:52:56.813566    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:52:56.813572    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:52:56.828848    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:52:56.828862    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:52:56.841384    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:52:56.841397    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:52:56.866625    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:52:56.866638    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:52:56.878620    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:52:56.878633    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:52:56.890846    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:52:56.890862    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:52:56.907801    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:52:56.907826    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:52:56.925388    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:52:56.925403    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:52:56.938586    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:52:56.938600    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:52:56.943368    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:52:56.943379    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:52:56.955995    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:52:56.956005    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:52:56.969506    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:52:56.969517    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:52:56.995233    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:52:56.995254    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:52:57.031845    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:52:57.031856    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:52:57.068873    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:52:57.068883    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:52:59.585213    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:53:04.587499    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:53:04.587981    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:53:04.618114    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:53:04.618258    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:53:04.635781    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:53:04.635886    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:53:04.649503    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:53:04.649583    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:53:04.660903    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:53:04.660989    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:53:04.671434    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:53:04.671506    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:53:04.681754    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:53:04.681830    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:53:04.693403    4416 logs.go:282] 0 containers: []
	W1003 20:53:04.693416    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:53:04.693479    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:53:04.704298    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:53:04.704315    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:53:04.704320    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:53:04.740861    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:53:04.740877    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:53:04.752694    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:53:04.752704    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:53:04.767294    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:53:04.767303    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:53:04.790250    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:53:04.790258    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:53:04.804160    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:53:04.804171    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:53:04.815751    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:53:04.815760    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:53:04.827485    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:53:04.827494    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:53:04.844218    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:53:04.844226    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:53:04.857999    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:53:04.858014    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:53:04.862467    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:53:04.862476    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:53:04.897258    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:53:04.897274    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:53:04.912432    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:53:04.912443    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:53:04.929398    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:53:04.929406    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:53:04.940946    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:53:04.940956    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:53:07.461705    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:53:12.464552    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:53:12.465024    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:53:12.503981    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:53:12.504159    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:53:12.526590    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:53:12.526700    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:53:12.544023    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:53:12.544119    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:53:12.556819    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:53:12.556898    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:53:12.571609    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:53:12.571687    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:53:12.582927    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:53:12.583000    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:53:12.593896    4416 logs.go:282] 0 containers: []
	W1003 20:53:12.593910    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:53:12.593976    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:53:12.604616    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:53:12.604635    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:53:12.604642    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:53:12.616735    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:53:12.616749    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:53:12.634248    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:53:12.634258    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:53:12.658194    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:53:12.658203    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:53:12.669454    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:53:12.669465    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:53:12.705227    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:53:12.705235    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:53:12.743090    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:53:12.743103    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:53:12.755198    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:53:12.755209    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:53:12.766934    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:53:12.766944    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:53:12.779310    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:53:12.779319    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:53:12.790726    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:53:12.790739    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:53:12.805760    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:53:12.805771    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:53:12.810382    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:53:12.810391    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:53:12.824766    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:53:12.824776    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:53:12.839465    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:53:12.839473    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:53:15.353390    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:53:20.356124    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:53:20.356620    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:53:20.394235    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:53:20.394372    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:53:20.416920    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:53:20.417037    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:53:20.431695    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:53:20.431781    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:53:20.444292    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:53:20.444370    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:53:20.455078    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:53:20.455151    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:53:20.465520    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:53:20.465588    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:53:20.477338    4416 logs.go:282] 0 containers: []
	W1003 20:53:20.477349    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:53:20.477406    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:53:20.487985    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:53:20.488001    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:53:20.488006    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:53:20.500367    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:53:20.500378    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:53:20.512866    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:53:20.512876    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:53:20.536654    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:53:20.536662    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:53:20.547851    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:53:20.547860    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:53:20.562581    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:53:20.562591    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:53:20.576958    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:53:20.576968    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:53:20.592782    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:53:20.592790    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:53:20.608342    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:53:20.608351    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:53:20.643340    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:53:20.643346    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:53:20.647460    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:53:20.647469    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:53:20.659685    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:53:20.659696    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:53:20.677708    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:53:20.677718    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:53:20.711823    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:53:20.711834    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:53:20.727944    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:53:20.727954    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:53:23.241146    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:53:28.243402    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:53:28.243992    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:53:28.284781    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:53:28.284933    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:53:28.309508    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:53:28.309615    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:53:28.323468    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:53:28.323560    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:53:28.335689    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:53:28.335752    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:53:28.347582    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:53:28.347656    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:53:28.366908    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:53:28.366990    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:53:28.377845    4416 logs.go:282] 0 containers: []
	W1003 20:53:28.377861    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:53:28.377917    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:53:28.389278    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:53:28.389298    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:53:28.389305    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:53:28.432599    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:53:28.432613    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:53:28.448924    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:53:28.448938    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:53:28.468839    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:53:28.468848    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:53:28.504044    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:53:28.504054    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:53:28.576819    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:53:28.576829    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:53:28.589414    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:53:28.589425    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:53:28.612005    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:53:28.612015    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:53:28.631757    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:53:28.631767    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:53:28.655432    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:53:28.655438    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:53:28.667304    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:53:28.667314    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:53:28.682534    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:53:28.682547    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:53:28.694000    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:53:28.694015    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:53:28.698495    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:53:28.698503    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:53:28.710147    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:53:28.710160    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:53:31.225456    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:53:36.227651    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:53:36.227892    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:53:36.248361    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:53:36.248461    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:53:36.263162    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:53:36.263255    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:53:36.275763    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:53:36.275833    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:53:36.287552    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:53:36.287623    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:53:36.302986    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:53:36.303051    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:53:36.315681    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:53:36.315738    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:53:36.325567    4416 logs.go:282] 0 containers: []
	W1003 20:53:36.325576    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:53:36.325628    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:53:36.336327    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:53:36.336345    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:53:36.336351    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:53:36.341361    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:53:36.341367    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:53:36.355153    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:53:36.355163    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:53:36.366503    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:53:36.366512    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:53:36.381836    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:53:36.381847    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:53:36.394694    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:53:36.394704    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:53:36.408527    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:53:36.408538    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:53:36.431172    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:53:36.431182    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:53:36.442893    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:53:36.442903    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:53:36.479244    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:53:36.479259    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:53:36.494825    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:53:36.494839    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:53:36.505950    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:53:36.505959    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:53:36.530193    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:53:36.530202    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:53:36.565586    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:53:36.565595    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:53:36.578285    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:53:36.578298    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:53:39.098443    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:53:44.100977    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:53:44.101322    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1003 20:53:44.130042    4416 logs.go:282] 1 containers: [1830ea43027c]
	I1003 20:53:44.130190    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1003 20:53:44.148715    4416 logs.go:282] 1 containers: [1444db8da9e8]
	I1003 20:53:44.148817    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1003 20:53:44.163312    4416 logs.go:282] 4 containers: [f5a31d25caeb 18cabdbc2554 6add665ec5b3 02baafe22d8e]
	I1003 20:53:44.163398    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1003 20:53:44.183316    4416 logs.go:282] 1 containers: [6b435028f524]
	I1003 20:53:44.183391    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1003 20:53:44.193988    4416 logs.go:282] 1 containers: [2702f679fac0]
	I1003 20:53:44.194068    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1003 20:53:44.204547    4416 logs.go:282] 1 containers: [bb4edd831b05]
	I1003 20:53:44.204626    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1003 20:53:44.215165    4416 logs.go:282] 0 containers: []
	W1003 20:53:44.215177    4416 logs.go:284] No container was found matching "kindnet"
	I1003 20:53:44.215242    4416 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1003 20:53:44.225644    4416 logs.go:282] 1 containers: [cdc1e3e14a1a]
	I1003 20:53:44.225662    4416 logs.go:123] Gathering logs for container status ...
	I1003 20:53:44.225667    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 20:53:44.237512    4416 logs.go:123] Gathering logs for kubelet ...
	I1003 20:53:44.237522    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 20:53:44.271603    4416 logs.go:123] Gathering logs for dmesg ...
	I1003 20:53:44.271610    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 20:53:44.275575    4416 logs.go:123] Gathering logs for describe nodes ...
	I1003 20:53:44.275583    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 20:53:44.312526    4416 logs.go:123] Gathering logs for coredns [6add665ec5b3] ...
	I1003 20:53:44.312536    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6add665ec5b3"
	I1003 20:53:44.324148    4416 logs.go:123] Gathering logs for coredns [02baafe22d8e] ...
	I1003 20:53:44.324163    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02baafe22d8e"
	I1003 20:53:44.335347    4416 logs.go:123] Gathering logs for kube-scheduler [6b435028f524] ...
	I1003 20:53:44.335360    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b435028f524"
	I1003 20:53:44.349869    4416 logs.go:123] Gathering logs for storage-provisioner [cdc1e3e14a1a] ...
	I1003 20:53:44.349879    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc1e3e14a1a"
	I1003 20:53:44.361222    4416 logs.go:123] Gathering logs for Docker ...
	I1003 20:53:44.361233    4416 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1003 20:53:44.383887    4416 logs.go:123] Gathering logs for kube-apiserver [1830ea43027c] ...
	I1003 20:53:44.383896    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1830ea43027c"
	I1003 20:53:44.397667    4416 logs.go:123] Gathering logs for coredns [18cabdbc2554] ...
	I1003 20:53:44.397677    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cabdbc2554"
	I1003 20:53:44.408924    4416 logs.go:123] Gathering logs for kube-proxy [2702f679fac0] ...
	I1003 20:53:44.408933    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2702f679fac0"
	I1003 20:53:44.421121    4416 logs.go:123] Gathering logs for kube-controller-manager [bb4edd831b05] ...
	I1003 20:53:44.421135    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb4edd831b05"
	I1003 20:53:44.443655    4416 logs.go:123] Gathering logs for etcd [1444db8da9e8] ...
	I1003 20:53:44.443665    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1444db8da9e8"
	I1003 20:53:44.457173    4416 logs.go:123] Gathering logs for coredns [f5a31d25caeb] ...
	I1003 20:53:44.457183    4416 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a31d25caeb"
	I1003 20:53:46.969854    4416 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1003 20:53:51.972275    4416 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1003 20:53:51.978488    4416 out.go:201] 
	W1003 20:53:51.983490    4416 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1003 20:53:51.983524    4416 out.go:270] * 
	W1003 20:53:51.985354    4416 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:53:51.994472    4416 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-455000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (580.65s)
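The repeated "Checking apiserver healthz at https://10.0.2.15:8443/healthz ... context deadline exceeded" lines above follow a probe-until-deadline pattern: hit the healthz endpoint with a short per-request timeout, back off, and give up once an overall deadline passes. The Go sketch below only illustrates that pattern and is not minikube's api_server.go; the URL, the 5-second request timeout, and the 6-minute overall deadline are assumptions read off the log.

// Minimal sketch of the healthz polling pattern seen in the log above.
// All constants here are illustrative assumptions, not minikube source.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly the gap between checks in the log
		Transport: &http.Transport{
			// the apiserver serves a self-signed cert inside the guest
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy within %s", deadline)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X", err)
	}
}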

                                                
                                    
TestPause/serial/Start (9.96s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-073000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-073000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.904493125s)

                                                
                                                
-- stdout --
	* [pause-073000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-073000" primary control-plane node in "pause-073000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-073000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-073000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-073000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-073000 -n pause-073000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-073000 -n pause-073000: exit status 7 (58.620583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-073000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.96s)
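Every qemu2 start in this report fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. nothing is listening on the vmnet helper socket when the driver tries to attach the VM's network. A quick way to reproduce that symptom outside the test suite is to dial the socket directly; the snippet below is a hypothetical standalone probe, with only the socket path taken from the output above.

// Hypothetical probe: dial the socket_vmnet unix socket and report whether
// anything is listening. Not part of the minikube test suite.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Same condition the qemu2 driver hits: nothing is accepting on the
		// socket, typically because the socket_vmnet service is not running.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}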

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-752000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-752000 --driver=qemu2 : exit status 80 (9.897709s)

                                                
                                                
-- stdout --
	* [NoKubernetes-752000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-752000" primary control-plane node in "NoKubernetes-752000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-752000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-752000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-752000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000: exit status 7 (56.489209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.95s)
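After each failed start, helpers_test.go runs a post-mortem "status --format={{.Host}}" and treats exit status 7 (host stopped) as acceptable, which is why the report says log retrieval was skipped. Below is a hedged sketch of that check; the binary path and profile name are copied from the log, and the surrounding wrapper is purely illustrative.

// Illustrative post-mortem status check, mirroring what helpers_test.go logs:
// exit status 7 means the host is stopped, which is tolerated ("may be ok").
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "NoKubernetes-752000", "-n", "NoKubernetes-752000")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			// Host is not running; skip log retrieval instead of failing hard.
			fmt.Printf("host is not running, skipping log retrieval (state=%q)\n", state)
			return
		}
		fmt.Println("status error:", err)
		return
	}
	fmt.Println("host state:", state)
}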

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --driver=qemu2 : exit status 80 (6.29346625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-752000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-752000
	* Restarting existing qemu2 VM for "NoKubernetes-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-752000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000: exit status 7 (69.4355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (6.36s)

                                                
                                    
TestNoKubernetes/serial/Start (5.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --driver=qemu2 : exit status 80 (5.773666291s)

                                                
                                                
-- stdout --
	* [NoKubernetes-752000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-752000
	* Restarting existing qemu2 VM for "NoKubernetes-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-752000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000: exit status 7 (56.425416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.83s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.86s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-752000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-752000 --driver=qemu2 : exit status 80 (5.811748084s)

-- stdout --
	* [NoKubernetes-752000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-752000
	* Restarting existing qemu2 VM for "NoKubernetes-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-752000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-752000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-752000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-752000 -n NoKubernetes-752000: exit status 7 (52.643083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.86s)
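
Note: the recovery printed in the stderr above is to delete the stale profile and retry. A sketch of that sequence, using the same binary and profile name as the failing test; it only helps once the socket_vmnet daemon is reachable again, since the start command is otherwise identical to the one that failed:

	out/minikube-darwin-arm64 delete -p NoKubernetes-752000
	out/minikube-darwin-arm64 start -p NoKubernetes-752000 --driver=qemu2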

TestNetworkPlugins/group/auto/Start (9.71s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.703628083s)

-- stdout --
	* [auto-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-783000" primary control-plane node in "auto-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:52:22.896964    4595 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:52:22.897134    4595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:22.897137    4595 out.go:358] Setting ErrFile to fd 2...
	I1003 20:52:22.897139    4595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:22.897270    4595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:52:22.898499    4595 out.go:352] Setting JSON to false
	I1003 20:52:22.916730    4595 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4913,"bootTime":1728009029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:52:22.916800    4595 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:52:22.921791    4595 out.go:177] * [auto-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:52:22.929713    4595 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:52:22.929749    4595 notify.go:220] Checking for updates...
	I1003 20:52:22.936707    4595 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:52:22.939662    4595 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:52:22.942713    4595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:52:22.945731    4595 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:52:22.948650    4595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:52:22.952113    4595 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:52:22.952187    4595 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:52:22.952231    4595 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:52:22.956654    4595 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:52:22.963661    4595 start.go:297] selected driver: qemu2
	I1003 20:52:22.963668    4595 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:52:22.963673    4595 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:52:22.966394    4595 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:52:22.971964    4595 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:52:22.975752    4595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:52:22.975772    4595 cni.go:84] Creating CNI manager for ""
	I1003 20:52:22.975793    4595 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:52:22.975800    4595 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:52:22.975834    4595 start.go:340] cluster config:
	{Name:auto-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_cli
ent SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:52:22.980670    4595 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:52:22.990655    4595 out.go:177] * Starting "auto-783000" primary control-plane node in "auto-783000" cluster
	I1003 20:52:22.993569    4595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:52:22.993583    4595 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:52:22.993592    4595 cache.go:56] Caching tarball of preloaded images
	I1003 20:52:22.993669    4595 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:52:22.993674    4595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:52:22.993779    4595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/auto-783000/config.json ...
	I1003 20:52:22.993791    4595 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/auto-783000/config.json: {Name:mk6028ac7d933ab6cc75c00a29dbcd4dafff0b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:52:22.994015    4595 start.go:360] acquireMachinesLock for auto-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:52:22.994057    4595 start.go:364] duration metric: took 37µs to acquireMachinesLock for "auto-783000"
	I1003 20:52:22.994068    4595 start.go:93] Provisioning new machine with config: &{Name:auto-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:auto-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:52:22.994096    4595 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:52:22.997736    4595 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:52:23.012498    4595 start.go:159] libmachine.API.Create for "auto-783000" (driver="qemu2")
	I1003 20:52:23.012525    4595 client.go:168] LocalClient.Create starting
	I1003 20:52:23.012599    4595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:52:23.012635    4595 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:23.012650    4595 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:23.012697    4595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:52:23.012726    4595 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:23.012736    4595 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:23.013090    4595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:52:23.149225    4595 main.go:141] libmachine: Creating SSH key...
	I1003 20:52:23.197288    4595 main.go:141] libmachine: Creating Disk image...
	I1003 20:52:23.197294    4595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:52:23.197480    4595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2
	I1003 20:52:23.207442    4595 main.go:141] libmachine: STDOUT: 
	I1003 20:52:23.207462    4595 main.go:141] libmachine: STDERR: 
	I1003 20:52:23.207539    4595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2 +20000M
	I1003 20:52:23.216356    4595 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:52:23.216371    4595 main.go:141] libmachine: STDERR: 
	I1003 20:52:23.216388    4595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2
	I1003 20:52:23.216395    4595 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:52:23.216406    4595 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:52:23.216433    4595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:b6:3d:6b:8b:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2
	I1003 20:52:23.218435    4595 main.go:141] libmachine: STDOUT: 
	I1003 20:52:23.218450    4595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:52:23.218470    4595 client.go:171] duration metric: took 205.93875ms to LocalClient.Create
	I1003 20:52:25.220585    4595 start.go:128] duration metric: took 2.226475625s to createHost
	I1003 20:52:25.220609    4595 start.go:83] releasing machines lock for "auto-783000", held for 2.22654725s
	W1003 20:52:25.220634    4595 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:25.228179    4595 out.go:177] * Deleting "auto-783000" in qemu2 ...
	W1003 20:52:25.236779    4595 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:25.236791    4595 start.go:729] Will try again in 5 seconds ...
	I1003 20:52:30.238889    4595 start.go:360] acquireMachinesLock for auto-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:52:30.239110    4595 start.go:364] duration metric: took 194.166µs to acquireMachinesLock for "auto-783000"
	I1003 20:52:30.239136    4595 start.go:93] Provisioning new machine with config: &{Name:auto-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:auto-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:52:30.239249    4595 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:52:30.252625    4595 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:52:30.279547    4595 start.go:159] libmachine.API.Create for "auto-783000" (driver="qemu2")
	I1003 20:52:30.279581    4595 client.go:168] LocalClient.Create starting
	I1003 20:52:30.279704    4595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:52:30.279759    4595 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:30.279776    4595 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:30.279822    4595 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:52:30.279862    4595 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:30.279871    4595 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:30.280345    4595 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:52:30.421996    4595 main.go:141] libmachine: Creating SSH key...
	I1003 20:52:30.508359    4595 main.go:141] libmachine: Creating Disk image...
	I1003 20:52:30.508368    4595 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:52:30.508586    4595 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2
	I1003 20:52:30.518703    4595 main.go:141] libmachine: STDOUT: 
	I1003 20:52:30.518729    4595 main.go:141] libmachine: STDERR: 
	I1003 20:52:30.518780    4595 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2 +20000M
	I1003 20:52:30.527465    4595 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:52:30.527485    4595 main.go:141] libmachine: STDERR: 
	I1003 20:52:30.527500    4595 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2
	I1003 20:52:30.527506    4595 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:52:30.527512    4595 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:52:30.527559    4595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:38:75:d0:33:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/auto-783000/disk.qcow2
	I1003 20:52:30.529500    4595 main.go:141] libmachine: STDOUT: 
	I1003 20:52:30.529516    4595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:52:30.529528    4595 client.go:171] duration metric: took 249.938125ms to LocalClient.Create
	I1003 20:52:32.531731    4595 start.go:128] duration metric: took 2.292446958s to createHost
	I1003 20:52:32.531847    4595 start.go:83] releasing machines lock for "auto-783000", held for 2.2927225s
	W1003 20:52:32.532271    4595 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:32.544077    4595 out.go:201] 
	W1003 20:52:32.548059    4595 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:52:32.548079    4595 out.go:270] * 
	* 
	W1003 20:52:32.549846    4595 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:52:32.558976    4595 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.71s)
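
Note: the verbose trace above shows the exact launch the driver attempts (socket_vmnet_client wrapping qemu-system-aarch64), so the failure occurs at the socket, before QEMU itself runs. If socket_vmnet was installed from source under /opt/socket_vmnet, the daemon is normally run as a root launchd service; the label below is the upstream default and is an assumption about this host's setup, not something recorded in this report:

	# check whether launchd knows about the daemon, then force a restart
	sudo launchctl list | grep socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet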

TestNetworkPlugins/group/kindnet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E1003 20:52:38.540036    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.7574615s)
                                                
-- stdout --
	* [kindnet-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-783000" primary control-plane node in "kindnet-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:52:34.802174    4704 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:52:34.802339    4704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:34.802342    4704 out.go:358] Setting ErrFile to fd 2...
	I1003 20:52:34.802345    4704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:34.802508    4704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:52:34.803677    4704 out.go:352] Setting JSON to false
	I1003 20:52:34.821450    4704 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4925,"bootTime":1728009029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:52:34.821555    4704 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:52:34.826665    4704 out.go:177] * [kindnet-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:52:34.834895    4704 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:52:34.834951    4704 notify.go:220] Checking for updates...
	I1003 20:52:34.841875    4704 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:52:34.844869    4704 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:52:34.847826    4704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:52:34.850892    4704 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:52:34.853874    4704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:52:34.855635    4704 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:52:34.855713    4704 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:52:34.855752    4704 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:52:34.859844    4704 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:52:34.866722    4704 start.go:297] selected driver: qemu2
	I1003 20:52:34.866730    4704 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:52:34.866737    4704 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:52:34.869232    4704 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:52:34.871838    4704 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:52:34.874889    4704 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:52:34.874907    4704 cni.go:84] Creating CNI manager for "kindnet"
	I1003 20:52:34.874911    4704 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 20:52:34.874933    4704 start.go:340] cluster config:
	{Name:kindnet-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/soc
ket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:52:34.879570    4704 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:52:34.887844    4704 out.go:177] * Starting "kindnet-783000" primary control-plane node in "kindnet-783000" cluster
	I1003 20:52:34.891850    4704 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:52:34.891866    4704 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:52:34.891878    4704 cache.go:56] Caching tarball of preloaded images
	I1003 20:52:34.891972    4704 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:52:34.891979    4704 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:52:34.892047    4704 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/kindnet-783000/config.json ...
	I1003 20:52:34.892059    4704 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/kindnet-783000/config.json: {Name:mk5ec67b0af7157fdc8d81de03b375423f0ec8e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:52:34.892424    4704 start.go:360] acquireMachinesLock for kindnet-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:52:34.892476    4704 start.go:364] duration metric: took 45.916µs to acquireMachinesLock for "kindnet-783000"
	I1003 20:52:34.892490    4704 start.go:93] Provisioning new machine with config: &{Name:kindnet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kindnet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:52:34.892524    4704 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:52:34.896858    4704 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:52:34.914399    4704 start.go:159] libmachine.API.Create for "kindnet-783000" (driver="qemu2")
	I1003 20:52:34.914433    4704 client.go:168] LocalClient.Create starting
	I1003 20:52:34.914500    4704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:52:34.914559    4704 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:34.914573    4704 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:34.914620    4704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:52:34.914650    4704 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:34.914658    4704 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:34.915130    4704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:52:35.054787    4704 main.go:141] libmachine: Creating SSH key...
	I1003 20:52:35.132397    4704 main.go:141] libmachine: Creating Disk image...
	I1003 20:52:35.132408    4704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:52:35.132610    4704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2
	I1003 20:52:35.142733    4704 main.go:141] libmachine: STDOUT: 
	I1003 20:52:35.142755    4704 main.go:141] libmachine: STDERR: 
	I1003 20:52:35.142814    4704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2 +20000M
	I1003 20:52:35.151448    4704 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:52:35.151463    4704 main.go:141] libmachine: STDERR: 
	I1003 20:52:35.151475    4704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2
	I1003 20:52:35.151481    4704 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:52:35.151492    4704 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:52:35.151528    4704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:94:16:49:26:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2
	I1003 20:52:35.153260    4704 main.go:141] libmachine: STDOUT: 
	I1003 20:52:35.153274    4704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:52:35.153294    4704 client.go:171] duration metric: took 238.855292ms to LocalClient.Create
	I1003 20:52:37.155408    4704 start.go:128] duration metric: took 2.262868083s to createHost
	I1003 20:52:37.155453    4704 start.go:83] releasing machines lock for "kindnet-783000", held for 2.2629715s
	W1003 20:52:37.155491    4704 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:37.169999    4704 out.go:177] * Deleting "kindnet-783000" in qemu2 ...
	W1003 20:52:37.183504    4704 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:37.183516    4704 start.go:729] Will try again in 5 seconds ...
	I1003 20:52:42.185628    4704 start.go:360] acquireMachinesLock for kindnet-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:52:42.185835    4704 start.go:364] duration metric: took 175.209µs to acquireMachinesLock for "kindnet-783000"
	I1003 20:52:42.185855    4704 start.go:93] Provisioning new machine with config: &{Name:kindnet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kindnet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:52:42.185945    4704 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:52:42.195251    4704 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:52:42.212193    4704 start.go:159] libmachine.API.Create for "kindnet-783000" (driver="qemu2")
	I1003 20:52:42.212226    4704 client.go:168] LocalClient.Create starting
	I1003 20:52:42.212309    4704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:52:42.212351    4704 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:42.212361    4704 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:42.212395    4704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:52:42.212427    4704 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:42.212433    4704 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:42.212819    4704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:52:42.352604    4704 main.go:141] libmachine: Creating SSH key...
	I1003 20:52:42.454801    4704 main.go:141] libmachine: Creating Disk image...
	I1003 20:52:42.454807    4704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:52:42.455018    4704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2
	I1003 20:52:42.465035    4704 main.go:141] libmachine: STDOUT: 
	I1003 20:52:42.465097    4704 main.go:141] libmachine: STDERR: 
	I1003 20:52:42.465158    4704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2 +20000M
	I1003 20:52:42.473643    4704 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:52:42.473684    4704 main.go:141] libmachine: STDERR: 
	I1003 20:52:42.473701    4704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2
	I1003 20:52:42.473708    4704 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:52:42.473717    4704 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:52:42.473744    4704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6a:e9:3f:d3:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kindnet-783000/disk.qcow2
	I1003 20:52:42.475560    4704 main.go:141] libmachine: STDOUT: 
	I1003 20:52:42.475574    4704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:52:42.475586    4704 client.go:171] duration metric: took 263.356041ms to LocalClient.Create
	I1003 20:52:44.477794    4704 start.go:128] duration metric: took 2.291819625s to createHost
	I1003 20:52:44.477910    4704 start.go:83] releasing machines lock for "kindnet-783000", held for 2.292063667s
	W1003 20:52:44.478231    4704 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:44.500075    4704 out.go:201] 
	W1003 20:52:44.503019    4704 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:52:44.503043    4704 out.go:270] * 
	* 
	W1003 20:52:44.505521    4704 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:52:44.515989    4704 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.76s)
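
Note: the advice box above asks for a logs bundle; with an explicit profile that would be the command below. When the VM never started, as here, the bundle will contain little beyond the host-side audit and driver logs:

	out/minikube-darwin-arm64 logs --file=logs.txt -p kindnet-783000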

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E1003 20:52:51.694543    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.794306583s)
                                                
-- stdout --
	* [calico-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-783000" primary control-plane node in "calico-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:52:46.843801    4817 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:52:46.843938    4817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:46.843942    4817 out.go:358] Setting ErrFile to fd 2...
	I1003 20:52:46.843944    4817 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:46.844074    4817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:52:46.845204    4817 out.go:352] Setting JSON to false
	I1003 20:52:46.863960    4817 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4937,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:52:46.864029    4817 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:52:46.869873    4817 out.go:177] * [calico-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:52:46.877871    4817 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:52:46.877931    4817 notify.go:220] Checking for updates...
	I1003 20:52:46.884873    4817 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:52:46.887906    4817 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:52:46.890926    4817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:52:46.893931    4817 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:52:46.896889    4817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:52:46.900324    4817 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:52:46.900399    4817 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:52:46.900447    4817 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:52:46.904850    4817 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:52:46.911845    4817 start.go:297] selected driver: qemu2
	I1003 20:52:46.911850    4817 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:52:46.911856    4817 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:52:46.914481    4817 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:52:46.917811    4817 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:52:46.920997    4817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:52:46.921020    4817 cni.go:84] Creating CNI manager for "calico"
	I1003 20:52:46.921031    4817 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1003 20:52:46.921074    4817 start.go:340] cluster config:
	{Name:calico-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:52:46.925874    4817 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:52:46.933831    4817 out.go:177] * Starting "calico-783000" primary control-plane node in "calico-783000" cluster
	I1003 20:52:46.937860    4817 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:52:46.937878    4817 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:52:46.937892    4817 cache.go:56] Caching tarball of preloaded images
	I1003 20:52:46.937989    4817 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:52:46.937995    4817 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:52:46.938063    4817 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/calico-783000/config.json ...
	I1003 20:52:46.938075    4817 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/calico-783000/config.json: {Name:mk2416ca25d08c69d6e97aa2b277ce0969e21aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:52:46.938363    4817 start.go:360] acquireMachinesLock for calico-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:52:46.938414    4817 start.go:364] duration metric: took 45µs to acquireMachinesLock for "calico-783000"
	I1003 20:52:46.938426    4817 start.go:93] Provisioning new machine with config: &{Name:calico-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:calico-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:52:46.938487    4817 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:52:46.941903    4817 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:52:46.958855    4817 start.go:159] libmachine.API.Create for "calico-783000" (driver="qemu2")
	I1003 20:52:46.958889    4817 client.go:168] LocalClient.Create starting
	I1003 20:52:46.958971    4817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:52:46.959007    4817 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:46.959016    4817 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:46.959063    4817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:52:46.959091    4817 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:46.959099    4817 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:46.959507    4817 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:52:47.097583    4817 main.go:141] libmachine: Creating SSH key...
	I1003 20:52:47.225496    4817 main.go:141] libmachine: Creating Disk image...
	I1003 20:52:47.225505    4817 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:52:47.225696    4817 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2
	I1003 20:52:47.235627    4817 main.go:141] libmachine: STDOUT: 
	I1003 20:52:47.235650    4817 main.go:141] libmachine: STDERR: 
	I1003 20:52:47.235714    4817 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2 +20000M
	I1003 20:52:47.244416    4817 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:52:47.244431    4817 main.go:141] libmachine: STDERR: 
	I1003 20:52:47.244443    4817 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2
	I1003 20:52:47.244449    4817 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:52:47.244462    4817 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:52:47.244495    4817 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:87:b8:c9:ee:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2
	I1003 20:52:47.246245    4817 main.go:141] libmachine: STDOUT: 
	I1003 20:52:47.246261    4817 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:52:47.246282    4817 client.go:171] duration metric: took 287.386708ms to LocalClient.Create
	I1003 20:52:49.248488    4817 start.go:128] duration metric: took 2.309981667s to createHost
	I1003 20:52:49.248509    4817 start.go:83] releasing machines lock for "calico-783000", held for 2.310090292s
	W1003 20:52:49.248537    4817 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:49.261072    4817 out.go:177] * Deleting "calico-783000" in qemu2 ...
	W1003 20:52:49.269879    4817 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:49.269885    4817 start.go:729] Will try again in 5 seconds ...
	I1003 20:52:54.271585    4817 start.go:360] acquireMachinesLock for calico-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:52:54.271831    4817 start.go:364] duration metric: took 203.208µs to acquireMachinesLock for "calico-783000"
	I1003 20:52:54.271857    4817 start.go:93] Provisioning new machine with config: &{Name:calico-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:calico-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:52:54.271941    4817 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:52:54.280071    4817 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:52:54.305667    4817 start.go:159] libmachine.API.Create for "calico-783000" (driver="qemu2")
	I1003 20:52:54.305711    4817 client.go:168] LocalClient.Create starting
	I1003 20:52:54.305821    4817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:52:54.305879    4817 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:54.305895    4817 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:54.305943    4817 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:52:54.305982    4817 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:54.305994    4817 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:54.306465    4817 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:52:54.447479    4817 main.go:141] libmachine: Creating SSH key...
	I1003 20:52:54.547307    4817 main.go:141] libmachine: Creating Disk image...
	I1003 20:52:54.547316    4817 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:52:54.547551    4817 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2
	I1003 20:52:54.557672    4817 main.go:141] libmachine: STDOUT: 
	I1003 20:52:54.557699    4817 main.go:141] libmachine: STDERR: 
	I1003 20:52:54.557760    4817 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2 +20000M
	I1003 20:52:54.566622    4817 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:52:54.566639    4817 main.go:141] libmachine: STDERR: 
	I1003 20:52:54.566661    4817 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2
	I1003 20:52:54.566665    4817 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:52:54.566675    4817 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:52:54.566709    4817 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:33:48:1a:dd:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/calico-783000/disk.qcow2
	I1003 20:52:54.568580    4817 main.go:141] libmachine: STDOUT: 
	I1003 20:52:54.568596    4817 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:52:54.568617    4817 client.go:171] duration metric: took 262.901ms to LocalClient.Create
	I1003 20:52:56.570823    4817 start.go:128] duration metric: took 2.298847834s to createHost
	I1003 20:52:56.570954    4817 start.go:83] releasing machines lock for "calico-783000", held for 2.299096042s
	W1003 20:52:56.571325    4817 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:52:56.580968    4817 out.go:201] 
	W1003 20:52:56.586066    4817 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:52:56.586091    4817 out.go:270] * 
	* 
	W1003 20:52:56.589008    4817 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:52:56.596069    4817 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.865878542s)

                                                
                                                
-- stdout --
	* [custom-flannel-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-783000" primary control-plane node in "custom-flannel-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:52:59.124935    4935 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:52:59.125097    4935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:59.125100    4935 out.go:358] Setting ErrFile to fd 2...
	I1003 20:52:59.125103    4935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:52:59.125229    4935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:52:59.126475    4935 out.go:352] Setting JSON to false
	I1003 20:52:59.144847    4935 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4950,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:52:59.144921    4935 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:52:59.150618    4935 out.go:177] * [custom-flannel-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:52:59.157678    4935 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:52:59.157769    4935 notify.go:220] Checking for updates...
	I1003 20:52:59.165453    4935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:52:59.168539    4935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:52:59.174557    4935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:52:59.177568    4935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:52:59.180544    4935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:52:59.182256    4935 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:52:59.182327    4935 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:52:59.182368    4935 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:52:59.186501    4935 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:52:59.193516    4935 start.go:297] selected driver: qemu2
	I1003 20:52:59.193522    4935 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:52:59.193527    4935 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:52:59.196052    4935 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:52:59.199613    4935 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:52:59.202715    4935 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:52:59.202734    4935 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1003 20:52:59.202750    4935 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1003 20:52:59.202800    4935 start.go:340] cluster config:
	{Name:custom-flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:52:59.207125    4935 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:52:59.215611    4935 out.go:177] * Starting "custom-flannel-783000" primary control-plane node in "custom-flannel-783000" cluster
	I1003 20:52:59.219552    4935 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:52:59.219564    4935 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:52:59.219574    4935 cache.go:56] Caching tarball of preloaded images
	I1003 20:52:59.219642    4935 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:52:59.219647    4935 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:52:59.219707    4935 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/custom-flannel-783000/config.json ...
	I1003 20:52:59.219717    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/custom-flannel-783000/config.json: {Name:mka7be3ce1fb955aa77b395df83c3a63c770c4c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:52:59.219991    4935 start.go:360] acquireMachinesLock for custom-flannel-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:52:59.220040    4935 start.go:364] duration metric: took 42.708µs to acquireMachinesLock for "custom-flannel-783000"
	I1003 20:52:59.220052    4935 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:52:59.220077    4935 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:52:59.224586    4935 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:52:59.239573    4935 start.go:159] libmachine.API.Create for "custom-flannel-783000" (driver="qemu2")
	I1003 20:52:59.239595    4935 client.go:168] LocalClient.Create starting
	I1003 20:52:59.239662    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:52:59.239699    4935 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:59.239711    4935 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:59.239759    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:52:59.239787    4935 main.go:141] libmachine: Decoding PEM data...
	I1003 20:52:59.239792    4935 main.go:141] libmachine: Parsing certificate...
	I1003 20:52:59.240270    4935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:52:59.378662    4935 main.go:141] libmachine: Creating SSH key...
	I1003 20:52:59.431149    4935 main.go:141] libmachine: Creating Disk image...
	I1003 20:52:59.431155    4935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:52:59.431365    4935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2
	I1003 20:52:59.441400    4935 main.go:141] libmachine: STDOUT: 
	I1003 20:52:59.441425    4935 main.go:141] libmachine: STDERR: 
	I1003 20:52:59.441475    4935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2 +20000M
	I1003 20:52:59.450242    4935 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:52:59.450256    4935 main.go:141] libmachine: STDERR: 
	I1003 20:52:59.450275    4935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2
	I1003 20:52:59.450279    4935 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:52:59.450290    4935 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:52:59.450314    4935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:0f:68:28:ef:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2
	I1003 20:52:59.452238    4935 main.go:141] libmachine: STDOUT: 
	I1003 20:52:59.452254    4935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:52:59.452273    4935 client.go:171] duration metric: took 212.671833ms to LocalClient.Create
	I1003 20:53:01.454499    4935 start.go:128] duration metric: took 2.234400625s to createHost
	I1003 20:53:01.454578    4935 start.go:83] releasing machines lock for "custom-flannel-783000", held for 2.23451s
	W1003 20:53:01.454618    4935 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:01.464921    4935 out.go:177] * Deleting "custom-flannel-783000" in qemu2 ...
	W1003 20:53:01.484517    4935 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:01.484534    4935 start.go:729] Will try again in 5 seconds ...
	I1003 20:53:06.486726    4935 start.go:360] acquireMachinesLock for custom-flannel-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:06.487414    4935 start.go:364] duration metric: took 561.791µs to acquireMachinesLock for "custom-flannel-783000"
	I1003 20:53:06.487570    4935 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:06.487875    4935 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:06.497590    4935 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:06.547112    4935 start.go:159] libmachine.API.Create for "custom-flannel-783000" (driver="qemu2")
	I1003 20:53:06.547175    4935 client.go:168] LocalClient.Create starting
	I1003 20:53:06.547324    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:06.547410    4935 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:06.547424    4935 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:06.547499    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:06.547554    4935 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:06.547564    4935 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:06.548165    4935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:06.697835    4935 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:06.897788    4935 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:06.897802    4935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:06.898044    4935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2
	I1003 20:53:06.908538    4935 main.go:141] libmachine: STDOUT: 
	I1003 20:53:06.908556    4935 main.go:141] libmachine: STDERR: 
	I1003 20:53:06.908619    4935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2 +20000M
	I1003 20:53:06.917141    4935 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:06.917160    4935 main.go:141] libmachine: STDERR: 
	I1003 20:53:06.917174    4935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2
	I1003 20:53:06.917178    4935 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:06.917187    4935 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:06.917213    4935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:3b:62:cd:b4:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/custom-flannel-783000/disk.qcow2
	I1003 20:53:06.919042    4935 main.go:141] libmachine: STDOUT: 
	I1003 20:53:06.919055    4935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:06.919069    4935 client.go:171] duration metric: took 371.888458ms to LocalClient.Create
	I1003 20:53:08.921279    4935 start.go:128] duration metric: took 2.433351666s to createHost
	I1003 20:53:08.921381    4935 start.go:83] releasing machines lock for "custom-flannel-783000", held for 2.433936583s
	W1003 20:53:08.921766    4935 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:08.933364    4935 out.go:201] 
	W1003 20:53:08.937565    4935 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:53:08.937608    4935 out.go:270] * 
	* 
	W1003 20:53:08.940557    4935 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:53:08.948456    4935 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.87s)

                                                
                                    
TestNetworkPlugins/group/false/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.017093209s)

                                                
                                                
-- stdout --
	* [false-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-783000" primary control-plane node in "false-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:53:11.418651    5052 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:53:11.418792    5052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:11.418795    5052 out.go:358] Setting ErrFile to fd 2...
	I1003 20:53:11.418798    5052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:11.418920    5052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:53:11.420143    5052 out.go:352] Setting JSON to false
	I1003 20:53:11.438712    5052 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4962,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:53:11.438802    5052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:53:11.442234    5052 out.go:177] * [false-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:53:11.449206    5052 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:53:11.449252    5052 notify.go:220] Checking for updates...
	I1003 20:53:11.455140    5052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:53:11.458182    5052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:53:11.461076    5052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:53:11.464188    5052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:53:11.467168    5052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:53:11.468892    5052 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:53:11.468966    5052 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:53:11.469006    5052 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:53:11.473186    5052 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:53:11.480004    5052 start.go:297] selected driver: qemu2
	I1003 20:53:11.480011    5052 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:53:11.480016    5052 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:53:11.482691    5052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:53:11.486131    5052 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:53:11.489218    5052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:53:11.489237    5052 cni.go:84] Creating CNI manager for "false"
	I1003 20:53:11.489271    5052 start.go:340] cluster config:
	{Name:false-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet
_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:53:11.494395    5052 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:53:11.503158    5052 out.go:177] * Starting "false-783000" primary control-plane node in "false-783000" cluster
	I1003 20:53:11.507134    5052 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:53:11.507151    5052 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:53:11.507163    5052 cache.go:56] Caching tarball of preloaded images
	I1003 20:53:11.507261    5052 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:53:11.507266    5052 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:53:11.507340    5052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/false-783000/config.json ...
	I1003 20:53:11.507350    5052 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/false-783000/config.json: {Name:mk70f09abd86bf685e200dd1d382ace86e1a7a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:53:11.507627    5052 start.go:360] acquireMachinesLock for false-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:11.507672    5052 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "false-783000"
	I1003 20:53:11.507683    5052 start.go:93] Provisioning new machine with config: &{Name:false-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:false-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:11.507718    5052 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:11.512148    5052 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:11.528057    5052 start.go:159] libmachine.API.Create for "false-783000" (driver="qemu2")
	I1003 20:53:11.528087    5052 client.go:168] LocalClient.Create starting
	I1003 20:53:11.528161    5052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:11.528197    5052 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:11.528212    5052 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:11.528259    5052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:11.528288    5052 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:11.528297    5052 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:11.528708    5052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:11.666715    5052 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:11.793919    5052 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:11.793929    5052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:11.794120    5052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2
	I1003 20:53:11.804096    5052 main.go:141] libmachine: STDOUT: 
	I1003 20:53:11.804115    5052 main.go:141] libmachine: STDERR: 
	I1003 20:53:11.804173    5052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2 +20000M
	I1003 20:53:11.812612    5052 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:11.812628    5052 main.go:141] libmachine: STDERR: 
	I1003 20:53:11.812649    5052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2
	I1003 20:53:11.812654    5052 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:11.812665    5052 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:11.812693    5052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:9e:8c:50:e8:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2
	I1003 20:53:11.814565    5052 main.go:141] libmachine: STDOUT: 
	I1003 20:53:11.814578    5052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:11.814596    5052 client.go:171] duration metric: took 286.501708ms to LocalClient.Create
	I1003 20:53:13.816880    5052 start.go:128] duration metric: took 2.30912125s to createHost
	I1003 20:53:13.816997    5052 start.go:83] releasing machines lock for "false-783000", held for 2.309314333s
	W1003 20:53:13.817049    5052 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:13.830172    5052 out.go:177] * Deleting "false-783000" in qemu2 ...
	W1003 20:53:13.853790    5052 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:13.853826    5052 start.go:729] Will try again in 5 seconds ...
	I1003 20:53:18.856142    5052 start.go:360] acquireMachinesLock for false-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:18.856866    5052 start.go:364] duration metric: took 574.875µs to acquireMachinesLock for "false-783000"
	I1003 20:53:18.857014    5052 start.go:93] Provisioning new machine with config: &{Name:false-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:false-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:18.857307    5052 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:18.868015    5052 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:18.917212    5052 start.go:159] libmachine.API.Create for "false-783000" (driver="qemu2")
	I1003 20:53:18.917272    5052 client.go:168] LocalClient.Create starting
	I1003 20:53:18.917423    5052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:18.917504    5052 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:18.917525    5052 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:18.917590    5052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:18.917648    5052 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:18.917661    5052 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:18.918331    5052 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:19.066567    5052 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:19.344759    5052 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:19.344772    5052 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:19.345004    5052 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2
	I1003 20:53:19.355195    5052 main.go:141] libmachine: STDOUT: 
	I1003 20:53:19.355224    5052 main.go:141] libmachine: STDERR: 
	I1003 20:53:19.355292    5052 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2 +20000M
	I1003 20:53:19.363908    5052 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:19.363931    5052 main.go:141] libmachine: STDERR: 
	I1003 20:53:19.363946    5052 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2
	I1003 20:53:19.363955    5052 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:19.363962    5052 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:19.363997    5052 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:fc:6c:6f:a9:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/false-783000/disk.qcow2
	I1003 20:53:19.365801    5052 main.go:141] libmachine: STDOUT: 
	I1003 20:53:19.365813    5052 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:19.365828    5052 client.go:171] duration metric: took 448.549208ms to LocalClient.Create
	I1003 20:53:21.368015    5052 start.go:128] duration metric: took 2.510642167s to createHost
	I1003 20:53:21.368122    5052 start.go:83] releasing machines lock for "false-783000", held for 2.511223709s
	W1003 20:53:21.368484    5052 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:21.376030    5052 out.go:201] 
	W1003 20:53:21.381197    5052 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:53:21.381263    5052 out.go:270] * 
	* 
	W1003 20:53:21.382881    5052 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:53:21.391992    5052 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.02s)
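
The start fails, here and in the other network-plugin runs in this report, because nothing is listening on /var/run/socket_vmnet: socket_vmnet_client's unix-socket connect is refused before QEMU ever boots. As a quick check outside the test suite (a minimal sketch, not part of the captured output; the socket path is taken from the log above), a small Go probe can attempt the same dial:

	// socket_vmnet_probe.go: standalone check that dials the unix socket
	// socket_vmnet_client connects to in the log above. A "connection refused"
	// error here reproduces the condition behind these failures.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path reported by the failing runs; adjust if socket_vmnet was
		// installed under a different prefix.
		const socketPath = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", socketPath)
	}

The probe exits non-zero on this agent until the socket_vmnet daemon is (re)started; once it is listening, socket_vmnet_client can hand the connected socket to QEMU as fd 3 (the -netdev socket,id=net0,fd=3 visible in the command lines above).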

TestNetworkPlugins/group/enable-default-cni/Start (9.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.724037s)

-- stdout --
	* [enable-default-cni-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-783000" primary control-plane node in "enable-default-cni-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:53:23.671846    5165 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:53:23.671992    5165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:23.671996    5165 out.go:358] Setting ErrFile to fd 2...
	I1003 20:53:23.671998    5165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:23.672128    5165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:53:23.673270    5165 out.go:352] Setting JSON to false
	I1003 20:53:23.691311    5165 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4974,"bootTime":1728009029,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:53:23.691384    5165 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:53:23.695781    5165 out.go:177] * [enable-default-cni-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:53:23.703765    5165 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:53:23.703856    5165 notify.go:220] Checking for updates...
	I1003 20:53:23.710658    5165 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:53:23.713729    5165 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:53:23.716704    5165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:53:23.719766    5165 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:53:23.722729    5165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:53:23.726087    5165 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:53:23.726163    5165 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:53:23.726211    5165 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:53:23.730678    5165 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:53:23.737723    5165 start.go:297] selected driver: qemu2
	I1003 20:53:23.737729    5165 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:53:23.737736    5165 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:53:23.740129    5165 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:53:23.742642    5165 out.go:177] * Automatically selected the socket_vmnet network
	E1003 20:53:23.745778    5165 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1003 20:53:23.745800    5165 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:53:23.745820    5165 cni.go:84] Creating CNI manager for "bridge"
	I1003 20:53:23.745824    5165 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:53:23.745862    5165 start.go:340] cluster config:
	{Name:enable-default-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt
/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:53:23.750367    5165 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:53:23.757705    5165 out.go:177] * Starting "enable-default-cni-783000" primary control-plane node in "enable-default-cni-783000" cluster
	I1003 20:53:23.761725    5165 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:53:23.761741    5165 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:53:23.761750    5165 cache.go:56] Caching tarball of preloaded images
	I1003 20:53:23.761823    5165 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:53:23.761829    5165 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:53:23.761907    5165 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/enable-default-cni-783000/config.json ...
	I1003 20:53:23.761916    5165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/enable-default-cni-783000/config.json: {Name:mk0d4d73cc3230e8d94c2b9f0b11eae834ef3384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:53:23.762229    5165 start.go:360] acquireMachinesLock for enable-default-cni-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:23.762275    5165 start.go:364] duration metric: took 36.5µs to acquireMachinesLock for "enable-default-cni-783000"
	I1003 20:53:23.762286    5165 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:23.762316    5165 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:23.766655    5165 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:23.782141    5165 start.go:159] libmachine.API.Create for "enable-default-cni-783000" (driver="qemu2")
	I1003 20:53:23.782164    5165 client.go:168] LocalClient.Create starting
	I1003 20:53:23.782236    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:23.782273    5165 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:23.782285    5165 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:23.782329    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:23.782356    5165 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:23.782365    5165 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:23.782781    5165 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:23.921646    5165 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:23.957230    5165 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:23.957236    5165 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:23.957440    5165 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I1003 20:53:23.967363    5165 main.go:141] libmachine: STDOUT: 
	I1003 20:53:23.967399    5165 main.go:141] libmachine: STDERR: 
	I1003 20:53:23.967459    5165 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2 +20000M
	I1003 20:53:23.975858    5165 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:23.975873    5165 main.go:141] libmachine: STDERR: 
	I1003 20:53:23.975892    5165 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I1003 20:53:23.975899    5165 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:23.975910    5165 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:23.975944    5165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:7f:7e:09:f5:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I1003 20:53:23.977791    5165 main.go:141] libmachine: STDOUT: 
	I1003 20:53:23.977804    5165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:23.977824    5165 client.go:171] duration metric: took 195.654625ms to LocalClient.Create
	I1003 20:53:25.980044    5165 start.go:128] duration metric: took 2.217700625s to createHost
	I1003 20:53:25.980197    5165 start.go:83] releasing machines lock for "enable-default-cni-783000", held for 2.21787175s
	W1003 20:53:25.980279    5165 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:25.991616    5165 out.go:177] * Deleting "enable-default-cni-783000" in qemu2 ...
	W1003 20:53:26.014816    5165 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:26.014844    5165 start.go:729] Will try again in 5 seconds ...
	I1003 20:53:31.016979    5165 start.go:360] acquireMachinesLock for enable-default-cni-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:31.017347    5165 start.go:364] duration metric: took 260.625µs to acquireMachinesLock for "enable-default-cni-783000"
	I1003 20:53:31.017414    5165 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:31.017566    5165 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:31.029050    5165 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:31.063091    5165 start.go:159] libmachine.API.Create for "enable-default-cni-783000" (driver="qemu2")
	I1003 20:53:31.063131    5165 client.go:168] LocalClient.Create starting
	I1003 20:53:31.063240    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:31.063314    5165 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:31.063329    5165 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:31.063379    5165 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:31.063427    5165 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:31.063438    5165 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:31.063989    5165 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:31.209444    5165 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:31.305426    5165 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:31.305433    5165 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:31.305667    5165 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I1003 20:53:31.316092    5165 main.go:141] libmachine: STDOUT: 
	I1003 20:53:31.316117    5165 main.go:141] libmachine: STDERR: 
	I1003 20:53:31.316178    5165 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2 +20000M
	I1003 20:53:31.325010    5165 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:31.325040    5165 main.go:141] libmachine: STDERR: 
	I1003 20:53:31.325051    5165 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I1003 20:53:31.325056    5165 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:31.325067    5165 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:31.325098    5165 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:21:81:e9:03:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/enable-default-cni-783000/disk.qcow2
	I1003 20:53:31.327046    5165 main.go:141] libmachine: STDOUT: 
	I1003 20:53:31.327061    5165 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:31.327076    5165 client.go:171] duration metric: took 263.939084ms to LocalClient.Create
	I1003 20:53:33.329277    5165 start.go:128] duration metric: took 2.311678125s to createHost
	I1003 20:53:33.329386    5165 start.go:83] releasing machines lock for "enable-default-cni-783000", held for 2.31201225s
	W1003 20:53:33.329775    5165 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:33.339402    5165 out.go:201] 
	W1003 20:53:33.344450    5165 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:53:33.344471    5165 out.go:270] * 
	* 
	W1003 20:53:33.346432    5165 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:53:33.357281    5165 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.73s)

TestNetworkPlugins/group/flannel/Start (9.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.719151666s)

-- stdout --
	* [flannel-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-783000" primary control-plane node in "flannel-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:53:35.637720    5274 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:53:35.637880    5274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:35.637884    5274 out.go:358] Setting ErrFile to fd 2...
	I1003 20:53:35.637886    5274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:35.638001    5274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:53:35.639175    5274 out.go:352] Setting JSON to false
	I1003 20:53:35.657021    5274 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4986,"bootTime":1728009029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:53:35.657104    5274 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:53:35.662080    5274 out.go:177] * [flannel-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:53:35.669334    5274 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:53:35.669402    5274 notify.go:220] Checking for updates...
	I1003 20:53:35.680196    5274 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:53:35.683265    5274 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:53:35.684357    5274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:53:35.687220    5274 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:53:35.690221    5274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:53:35.693570    5274 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:53:35.693639    5274 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:53:35.693685    5274 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:53:35.698184    5274 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:53:35.705237    5274 start.go:297] selected driver: qemu2
	I1003 20:53:35.705242    5274 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:53:35.705247    5274 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:53:35.707598    5274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:53:35.711211    5274 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:53:35.714345    5274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:53:35.714366    5274 cni.go:84] Creating CNI manager for "flannel"
	I1003 20:53:35.714369    5274 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1003 20:53:35.714407    5274 start.go:340] cluster config:
	{Name:flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/soc
ket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:53:35.718510    5274 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:53:35.726185    5274 out.go:177] * Starting "flannel-783000" primary control-plane node in "flannel-783000" cluster
	I1003 20:53:35.730152    5274 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:53:35.730164    5274 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:53:35.730171    5274 cache.go:56] Caching tarball of preloaded images
	I1003 20:53:35.730242    5274 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:53:35.730247    5274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:53:35.730298    5274 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/flannel-783000/config.json ...
	I1003 20:53:35.730307    5274 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/flannel-783000/config.json: {Name:mk2a068fed972ce69f9e7fee8e068def6602013b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:53:35.730655    5274 start.go:360] acquireMachinesLock for flannel-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:35.730701    5274 start.go:364] duration metric: took 40.042µs to acquireMachinesLock for "flannel-783000"
	I1003 20:53:35.730713    5274 start.go:93] Provisioning new machine with config: &{Name:flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:35.730747    5274 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:35.739239    5274 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:35.753810    5274 start.go:159] libmachine.API.Create for "flannel-783000" (driver="qemu2")
	I1003 20:53:35.753835    5274 client.go:168] LocalClient.Create starting
	I1003 20:53:35.753901    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:35.753938    5274 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:35.753951    5274 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:35.753998    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:35.754027    5274 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:35.754037    5274 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:35.754396    5274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:35.891977    5274 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:35.942537    5274 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:35.942544    5274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:35.942763    5274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2
	I1003 20:53:35.952569    5274 main.go:141] libmachine: STDOUT: 
	I1003 20:53:35.952585    5274 main.go:141] libmachine: STDERR: 
	I1003 20:53:35.952642    5274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2 +20000M
	I1003 20:53:35.961606    5274 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:35.961625    5274 main.go:141] libmachine: STDERR: 
	I1003 20:53:35.961643    5274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2
	I1003 20:53:35.961649    5274 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:35.961661    5274 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:35.961699    5274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:7a:f4:c2:6c:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2
	I1003 20:53:35.963738    5274 main.go:141] libmachine: STDOUT: 
	I1003 20:53:35.963752    5274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:35.963772    5274 client.go:171] duration metric: took 209.932083ms to LocalClient.Create
	I1003 20:53:37.965951    5274 start.go:128] duration metric: took 2.235183166s to createHost
	I1003 20:53:37.966026    5274 start.go:83] releasing machines lock for "flannel-783000", held for 2.235316959s
	W1003 20:53:37.966077    5274 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:37.978779    5274 out.go:177] * Deleting "flannel-783000" in qemu2 ...
	W1003 20:53:37.999519    5274 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:37.999549    5274 start.go:729] Will try again in 5 seconds ...
	I1003 20:53:43.000422    5274 start.go:360] acquireMachinesLock for flannel-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:43.000697    5274 start.go:364] duration metric: took 230.5µs to acquireMachinesLock for "flannel-783000"
	I1003 20:53:43.000737    5274 start.go:93] Provisioning new machine with config: &{Name:flannel-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:flannel-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:43.000830    5274 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:43.009098    5274 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:43.026690    5274 start.go:159] libmachine.API.Create for "flannel-783000" (driver="qemu2")
	I1003 20:53:43.026721    5274 client.go:168] LocalClient.Create starting
	I1003 20:53:43.026786    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:43.026837    5274 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:43.026846    5274 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:43.026886    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:43.026915    5274 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:43.026921    5274 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:43.027277    5274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:43.167249    5274 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:43.266991    5274 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:43.267001    5274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:43.267200    5274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2
	I1003 20:53:43.277157    5274 main.go:141] libmachine: STDOUT: 
	I1003 20:53:43.277198    5274 main.go:141] libmachine: STDERR: 
	I1003 20:53:43.277269    5274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2 +20000M
	I1003 20:53:43.285783    5274 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:43.285808    5274 main.go:141] libmachine: STDERR: 
	I1003 20:53:43.285825    5274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2
	I1003 20:53:43.285832    5274 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:43.285842    5274 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:43.285867    5274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:d7:12:23:d3:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/flannel-783000/disk.qcow2
	I1003 20:53:43.287794    5274 main.go:141] libmachine: STDOUT: 
	I1003 20:53:43.287808    5274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:43.287821    5274 client.go:171] duration metric: took 261.094208ms to LocalClient.Create
	I1003 20:53:45.290031    5274 start.go:128] duration metric: took 2.289166458s to createHost
	I1003 20:53:45.290135    5274 start.go:83] releasing machines lock for "flannel-783000", held for 2.289425583s
	W1003 20:53:45.290508    5274 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:45.299229    5274 out.go:201] 
	W1003 20:53:45.304310    5274 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:53:45.304358    5274 out.go:270] * 
	* 
	W1003 20:53:45.306742    5274 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:53:45.314047    5274 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.72s)
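Note: the flannel, bridge, and kubenet Start failures in this group all report the same condition in the captured stderr: socket_vmnet_client exits with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so the qemu2 driver never gets a VM on the socket_vmnet network. A minimal, self-contained Go sketch of that connectivity probe is below; it is not part of the test suite, and the file name and check are illustrative only — it simply dials the same unix socket path the driver passes to socket_vmnet_client.

	// probe_socket_vmnet.go — illustrative sketch, not minikube code.
	// Dials the host-side socket_vmnet unix socket; a "connection refused"
	// here matches the failure captured in the stderr blocks of this report.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}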

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.89920075s)

                                                
                                                
-- stdout --
	* [bridge-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-783000" primary control-plane node in "bridge-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:53:47.781393    5391 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:53:47.781559    5391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:47.781563    5391 out.go:358] Setting ErrFile to fd 2...
	I1003 20:53:47.781565    5391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:47.781706    5391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:53:47.782871    5391 out.go:352] Setting JSON to false
	I1003 20:53:47.801136    5391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4998,"bootTime":1728009029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:53:47.801204    5391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:53:47.807333    5391 out.go:177] * [bridge-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:53:47.815327    5391 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:53:47.815413    5391 notify.go:220] Checking for updates...
	I1003 20:53:47.822398    5391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:53:47.825289    5391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:53:47.828364    5391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:53:47.831391    5391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:53:47.834413    5391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:53:47.837709    5391 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:53:47.837781    5391 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:53:47.837830    5391 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:53:47.842393    5391 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:53:47.849335    5391 start.go:297] selected driver: qemu2
	I1003 20:53:47.849342    5391 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:53:47.849348    5391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:53:47.851814    5391 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:53:47.855359    5391 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:53:47.856758    5391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:53:47.856780    5391 cni.go:84] Creating CNI manager for "bridge"
	I1003 20:53:47.856789    5391 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:53:47.856841    5391 start.go:340] cluster config:
	{Name:bridge-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:53:47.861621    5391 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:53:47.869344    5391 out.go:177] * Starting "bridge-783000" primary control-plane node in "bridge-783000" cluster
	I1003 20:53:47.873354    5391 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:53:47.873371    5391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:53:47.873380    5391 cache.go:56] Caching tarball of preloaded images
	I1003 20:53:47.873457    5391 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:53:47.873463    5391 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:53:47.873540    5391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/bridge-783000/config.json ...
	I1003 20:53:47.873551    5391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/bridge-783000/config.json: {Name:mk4f76c801b22cc9bf79d47541aebf28ef506fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:53:47.873900    5391 start.go:360] acquireMachinesLock for bridge-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:47.873948    5391 start.go:364] duration metric: took 42.291µs to acquireMachinesLock for "bridge-783000"
	I1003 20:53:47.873959    5391 start.go:93] Provisioning new machine with config: &{Name:bridge-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:bridge-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:47.873984    5391 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:47.878421    5391 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:47.895250    5391 start.go:159] libmachine.API.Create for "bridge-783000" (driver="qemu2")
	I1003 20:53:47.895274    5391 client.go:168] LocalClient.Create starting
	I1003 20:53:47.895340    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:47.895376    5391 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:47.895389    5391 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:47.895435    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:47.895463    5391 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:47.895469    5391 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:47.895826    5391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:48.033274    5391 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:48.197048    5391 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:48.197059    5391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:48.197280    5391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2
	I1003 20:53:48.207773    5391 main.go:141] libmachine: STDOUT: 
	I1003 20:53:48.207792    5391 main.go:141] libmachine: STDERR: 
	I1003 20:53:48.207847    5391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2 +20000M
	I1003 20:53:48.216426    5391 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:48.216442    5391 main.go:141] libmachine: STDERR: 
	I1003 20:53:48.216466    5391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2
	I1003 20:53:48.216470    5391 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:48.216485    5391 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:48.216519    5391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:3b:3d:1f:19:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2
	I1003 20:53:48.218395    5391 main.go:141] libmachine: STDOUT: 
	I1003 20:53:48.218408    5391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:48.218426    5391 client.go:171] duration metric: took 323.1475ms to LocalClient.Create
	I1003 20:53:50.220657    5391 start.go:128] duration metric: took 2.346643583s to createHost
	I1003 20:53:50.220757    5391 start.go:83] releasing machines lock for "bridge-783000", held for 2.346783584s
	W1003 20:53:50.220793    5391 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:50.230100    5391 out.go:177] * Deleting "bridge-783000" in qemu2 ...
	W1003 20:53:50.249882    5391 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:50.249918    5391 start.go:729] Will try again in 5 seconds ...
	I1003 20:53:55.252186    5391 start.go:360] acquireMachinesLock for bridge-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:55.252785    5391 start.go:364] duration metric: took 494.833µs to acquireMachinesLock for "bridge-783000"
	I1003 20:53:55.252852    5391 start.go:93] Provisioning new machine with config: &{Name:bridge-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:bridge-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:55.253149    5391 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:55.263993    5391 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:55.313157    5391 start.go:159] libmachine.API.Create for "bridge-783000" (driver="qemu2")
	I1003 20:53:55.313220    5391 client.go:168] LocalClient.Create starting
	I1003 20:53:55.313378    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:55.313459    5391 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:55.313477    5391 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:55.313543    5391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:55.313603    5391 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:55.313621    5391 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:55.314238    5391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:53:55.465042    5391 main.go:141] libmachine: Creating SSH key...
	I1003 20:53:55.588431    5391 main.go:141] libmachine: Creating Disk image...
	I1003 20:53:55.588440    5391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:53:55.588666    5391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2
	I1003 20:53:55.598813    5391 main.go:141] libmachine: STDOUT: 
	I1003 20:53:55.598833    5391 main.go:141] libmachine: STDERR: 
	I1003 20:53:55.598924    5391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2 +20000M
	I1003 20:53:55.607952    5391 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:53:55.607972    5391 main.go:141] libmachine: STDERR: 
	I1003 20:53:55.607995    5391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2
	I1003 20:53:55.608001    5391 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:53:55.608010    5391 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:53:55.608039    5391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:2c:65:05:f0:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/bridge-783000/disk.qcow2
	I1003 20:53:55.610242    5391 main.go:141] libmachine: STDOUT: 
	I1003 20:53:55.610259    5391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:53:55.610282    5391 client.go:171] duration metric: took 297.054917ms to LocalClient.Create
	I1003 20:53:57.610981    5391 start.go:128] duration metric: took 2.357813459s to createHost
	I1003 20:53:57.611011    5391 start.go:83] releasing machines lock for "bridge-783000", held for 2.358207084s
	W1003 20:53:57.611145    5391 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:53:57.626144    5391 out.go:201] 
	W1003 20:53:57.629159    5391 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:53:57.629171    5391 out.go:270] * 
	* 
	W1003 20:53:57.630249    5391 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:53:57.641179    5391 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.90s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-783000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.813412041s)

                                                
                                                
-- stdout --
	* [kubenet-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-783000" primary control-plane node in "kubenet-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:53:59.868499    5507 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:53:59.868666    5507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:59.868673    5507 out.go:358] Setting ErrFile to fd 2...
	I1003 20:53:59.868675    5507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:53:59.868810    5507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:53:59.869970    5507 out.go:352] Setting JSON to false
	I1003 20:53:59.887825    5507 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5010,"bootTime":1728009029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:53:59.887897    5507 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:53:59.893825    5507 out.go:177] * [kubenet-783000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:53:59.897721    5507 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:53:59.897808    5507 notify.go:220] Checking for updates...
	I1003 20:53:59.905717    5507 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:53:59.908728    5507 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:53:59.911731    5507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:53:59.914814    5507 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:53:59.917769    5507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:53:59.921016    5507 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:53:59.921091    5507 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:53:59.921144    5507 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:53:59.925734    5507 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:53:59.932690    5507 start.go:297] selected driver: qemu2
	I1003 20:53:59.932696    5507 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:53:59.932702    5507 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:53:59.935092    5507 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:53:59.937718    5507 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:53:59.940830    5507 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:53:59.940849    5507 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1003 20:53:59.940892    5507 start.go:340] cluster config:
	{Name:kubenet-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:53:59.945494    5507 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:53:59.953735    5507 out.go:177] * Starting "kubenet-783000" primary control-plane node in "kubenet-783000" cluster
	I1003 20:53:59.956734    5507 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:53:59.956752    5507 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:53:59.956761    5507 cache.go:56] Caching tarball of preloaded images
	I1003 20:53:59.956838    5507 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:53:59.956844    5507 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:53:59.956926    5507 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/kubenet-783000/config.json ...
	I1003 20:53:59.956937    5507 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/kubenet-783000/config.json: {Name:mk51b711d0e2390b18a771a0a82477a8b291ffd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:53:59.957271    5507 start.go:360] acquireMachinesLock for kubenet-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:53:59.957325    5507 start.go:364] duration metric: took 48.125µs to acquireMachinesLock for "kubenet-783000"
	I1003 20:53:59.957340    5507 start.go:93] Provisioning new machine with config: &{Name:kubenet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kubenet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:53:59.957376    5507 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:53:59.965555    5507 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:53:59.982951    5507 start.go:159] libmachine.API.Create for "kubenet-783000" (driver="qemu2")
	I1003 20:53:59.982993    5507 client.go:168] LocalClient.Create starting
	I1003 20:53:59.983061    5507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:53:59.983106    5507 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:59.983120    5507 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:59.983171    5507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:53:59.983200    5507 main.go:141] libmachine: Decoding PEM data...
	I1003 20:53:59.983207    5507 main.go:141] libmachine: Parsing certificate...
	I1003 20:53:59.983683    5507 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:00.122003    5507 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:00.199757    5507 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:00.199765    5507 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:00.199978    5507 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2
	I1003 20:54:00.209972    5507 main.go:141] libmachine: STDOUT: 
	I1003 20:54:00.209998    5507 main.go:141] libmachine: STDERR: 
	I1003 20:54:00.210062    5507 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2 +20000M
	I1003 20:54:00.218789    5507 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:00.218804    5507 main.go:141] libmachine: STDERR: 
	I1003 20:54:00.218834    5507 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2
	I1003 20:54:00.218840    5507 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:00.218853    5507 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:00.218879    5507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c0:9f:3d:04:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2
	I1003 20:54:00.220776    5507 main.go:141] libmachine: STDOUT: 
	I1003 20:54:00.220791    5507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:00.220814    5507 client.go:171] duration metric: took 237.8145ms to LocalClient.Create
	I1003 20:54:02.222961    5507 start.go:128] duration metric: took 2.265565708s to createHost
	I1003 20:54:02.223017    5507 start.go:83] releasing machines lock for "kubenet-783000", held for 2.265684125s
	W1003 20:54:02.223074    5507 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:02.236230    5507 out.go:177] * Deleting "kubenet-783000" in qemu2 ...
	W1003 20:54:02.253763    5507 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:02.253775    5507 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:07.256055    5507 start.go:360] acquireMachinesLock for kubenet-783000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:07.256777    5507 start.go:364] duration metric: took 601.541µs to acquireMachinesLock for "kubenet-783000"
	I1003 20:54:07.256928    5507 start.go:93] Provisioning new machine with config: &{Name:kubenet-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kubenet-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:07.257197    5507 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:07.266784    5507 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1003 20:54:07.317412    5507 start.go:159] libmachine.API.Create for "kubenet-783000" (driver="qemu2")
	I1003 20:54:07.317502    5507 client.go:168] LocalClient.Create starting
	I1003 20:54:07.317662    5507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:07.317751    5507 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:07.317767    5507 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:07.317843    5507 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:07.317902    5507 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:07.317917    5507 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:07.318655    5507 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:07.468602    5507 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:07.598393    5507 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:07.598406    5507 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:07.598641    5507 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2
	I1003 20:54:07.609151    5507 main.go:141] libmachine: STDOUT: 
	I1003 20:54:07.609171    5507 main.go:141] libmachine: STDERR: 
	I1003 20:54:07.609238    5507 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2 +20000M
	I1003 20:54:07.617843    5507 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:07.617859    5507 main.go:141] libmachine: STDERR: 
	I1003 20:54:07.617872    5507 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2
	I1003 20:54:07.617877    5507 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:07.617888    5507 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:07.617922    5507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:8f:f9:16:ec:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/kubenet-783000/disk.qcow2
	I1003 20:54:07.619781    5507 main.go:141] libmachine: STDOUT: 
	I1003 20:54:07.619800    5507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:07.619824    5507 client.go:171] duration metric: took 302.313583ms to LocalClient.Create
	I1003 20:54:09.621949    5507 start.go:128] duration metric: took 2.36473525s to createHost
	I1003 20:54:09.621992    5507 start.go:83] releasing machines lock for "kubenet-783000", held for 2.365189375s
	W1003 20:54:09.622100    5507 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:09.629493    5507 out.go:201] 
	W1003 20:54:09.633375    5507 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:09.633381    5507 out.go:270] * 
	* 
	W1003 20:54:09.634022    5507 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:09.645433    5507 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.81s)
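Note: every "exit status 80" in this group traces back to the same host-side condition seen in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives a network file descriptor. A minimal diagnostic sketch for the build host follows; it assumes socket_vmnet was installed as a Homebrew service (a from-source install would be restarted through its own launchd plist instead):

	# is the daemon alive, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Homebrew-managed install: restart the service as root so it can own the socket
	sudo brew services restart socket_vmnet

Once the socket accepts connections again, the same /opt/socket_vmnet/bin/socket_vmnet_client ... qemu-system-aarch64 invocation logged above should boot the VM instead of failing with "Connection refused".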

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.77040575s)

                                                
                                                
-- stdout --
	* [old-k8s-version-789000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-789000" primary control-plane node in "old-k8s-version-789000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-789000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:11.856723    5620 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:11.856896    5620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:11.856899    5620 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:11.856902    5620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:11.857032    5620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:11.858210    5620 out.go:352] Setting JSON to false
	I1003 20:54:11.876082    5620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5022,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:11.876148    5620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:11.881837    5620 out.go:177] * [old-k8s-version-789000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:11.888917    5620 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:11.888974    5620 notify.go:220] Checking for updates...
	I1003 20:54:11.895914    5620 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:11.898803    5620 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:11.901963    5620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:11.904901    5620 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:11.906223    5620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:11.909234    5620 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:11.909306    5620 config.go:182] Loaded profile config "stopped-upgrade-455000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1003 20:54:11.909344    5620 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:11.913874    5620 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:54:11.918912    5620 start.go:297] selected driver: qemu2
	I1003 20:54:11.918919    5620 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:54:11.918926    5620 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:11.921344    5620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:54:11.924894    5620 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:54:11.927959    5620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:11.927976    5620 cni.go:84] Creating CNI manager for ""
	I1003 20:54:11.927996    5620 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 20:54:11.928027    5620 start.go:340] cluster config:
	{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin
/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:11.932473    5620 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:11.940936    5620 out.go:177] * Starting "old-k8s-version-789000" primary control-plane node in "old-k8s-version-789000" cluster
	I1003 20:54:11.944793    5620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 20:54:11.944809    5620 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1003 20:54:11.944820    5620 cache.go:56] Caching tarball of preloaded images
	I1003 20:54:11.944897    5620 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:54:11.944902    5620 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1003 20:54:11.944971    5620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/old-k8s-version-789000/config.json ...
	I1003 20:54:11.944987    5620 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/old-k8s-version-789000/config.json: {Name:mk88be31c980ef10cf22b578b4dbd122809be286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:54:11.945304    5620 start.go:360] acquireMachinesLock for old-k8s-version-789000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:11.945354    5620 start.go:364] duration metric: took 40µs to acquireMachinesLock for "old-k8s-version-789000"
	I1003 20:54:11.945365    5620 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:11.945393    5620 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:11.949891    5620 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:11.965876    5620 start.go:159] libmachine.API.Create for "old-k8s-version-789000" (driver="qemu2")
	I1003 20:54:11.965901    5620 client.go:168] LocalClient.Create starting
	I1003 20:54:11.965963    5620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:11.966000    5620 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:11.966009    5620 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:11.966048    5620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:11.966079    5620 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:11.966087    5620 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:11.966427    5620 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:12.104088    5620 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:12.243128    5620 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:12.243136    5620 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:12.243359    5620 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:12.253378    5620 main.go:141] libmachine: STDOUT: 
	I1003 20:54:12.253397    5620 main.go:141] libmachine: STDERR: 
	I1003 20:54:12.253456    5620 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2 +20000M
	I1003 20:54:12.262181    5620 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:12.262203    5620 main.go:141] libmachine: STDERR: 
	I1003 20:54:12.262223    5620 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:12.262229    5620 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:12.262241    5620 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:12.262265    5620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:10:d2:b2:f3:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:12.264151    5620 main.go:141] libmachine: STDOUT: 
	I1003 20:54:12.264166    5620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:12.264186    5620 client.go:171] duration metric: took 298.278875ms to LocalClient.Create
	I1003 20:54:14.266402    5620 start.go:128] duration metric: took 2.320956s to createHost
	I1003 20:54:14.266504    5620 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 2.321138083s
	W1003 20:54:14.266550    5620 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:14.271190    5620 out.go:177] * Deleting "old-k8s-version-789000" in qemu2 ...
	W1003 20:54:14.298461    5620 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:14.298490    5620 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:19.300661    5620 start.go:360] acquireMachinesLock for old-k8s-version-789000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:19.300947    5620 start.go:364] duration metric: took 238.417µs to acquireMachinesLock for "old-k8s-version-789000"
	I1003 20:54:19.301012    5620 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:19.301141    5620 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:19.310440    5620 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:19.340230    5620 start.go:159] libmachine.API.Create for "old-k8s-version-789000" (driver="qemu2")
	I1003 20:54:19.340271    5620 client.go:168] LocalClient.Create starting
	I1003 20:54:19.340384    5620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:19.340448    5620 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:19.340462    5620 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:19.340509    5620 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:19.340553    5620 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:19.340565    5620 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:19.341160    5620 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:19.484891    5620 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:19.532009    5620 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:19.532017    5620 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:19.532230    5620 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:19.542880    5620 main.go:141] libmachine: STDOUT: 
	I1003 20:54:19.542908    5620 main.go:141] libmachine: STDERR: 
	I1003 20:54:19.542969    5620 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2 +20000M
	I1003 20:54:19.551913    5620 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:19.551930    5620 main.go:141] libmachine: STDERR: 
	I1003 20:54:19.551944    5620 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:19.551950    5620 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:19.551959    5620 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:19.551985    5620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:3c:41:ad:aa:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:19.553961    5620 main.go:141] libmachine: STDOUT: 
	I1003 20:54:19.553976    5620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:19.553990    5620 client.go:171] duration metric: took 213.713708ms to LocalClient.Create
	I1003 20:54:21.556177    5620 start.go:128] duration metric: took 2.255007417s to createHost
	I1003 20:54:21.556251    5620 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 2.255290208s
	W1003 20:54:21.556591    5620 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:21.567305    5620 out.go:201] 
	W1003 20:54:21.571451    5620 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:21.571472    5620 out.go:270] * 
	* 
	W1003 20:54:21.573603    5620 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:21.584152    5620 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (59.952084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)
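Note on the serial sub-tests that follow: because FirstStart never created the "old-k8s-version-789000" VM, no kubeconfig context was written, so DeployApp and EnableAddonWhileActive below fail immediately with `context "old-k8s-version-789000" does not exist` rather than from an independent defect. An illustrative check on the host (standard kubectl/minikube status commands, shown only as a sketch):

	kubectl config get-contexts
	out/minikube-darwin-arm64 status -p old-k8s-version-789000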

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-789000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-789000 create -f testdata/busybox.yaml: exit status 1 (30.006125ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-789000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-789000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (30.332291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (31.26025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-789000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-789000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-789000 describe deploy/metrics-server -n kube-system: exit status 1 (27.299125ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-789000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-789000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (30.992208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.945241666s)

                                                
                                                
-- stdout --
	* [no-preload-431000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-431000" primary control-plane node in "no-preload-431000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-431000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:24.881428    5669 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:24.881570    5669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:24.881574    5669 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:24.881576    5669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:24.881722    5669 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:24.882825    5669 out.go:352] Setting JSON to false
	I1003 20:54:24.900718    5669 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5035,"bootTime":1728009029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:24.900788    5669 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:24.902655    5669 out.go:177] * [no-preload-431000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:24.909365    5669 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:24.909468    5669 notify.go:220] Checking for updates...
	I1003 20:54:24.915348    5669 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:24.918387    5669 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:24.919678    5669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:24.922338    5669 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:24.925358    5669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:24.928729    5669 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:24.928806    5669 config.go:182] Loaded profile config "old-k8s-version-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1003 20:54:24.928853    5669 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:24.933298    5669 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:54:24.940331    5669 start.go:297] selected driver: qemu2
	I1003 20:54:24.940338    5669 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:54:24.940344    5669 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:24.942771    5669 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:54:24.946355    5669 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:54:24.949465    5669 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:24.949501    5669 cni.go:84] Creating CNI manager for ""
	I1003 20:54:24.949523    5669 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:54:24.949532    5669 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:54:24.949573    5669 start.go:340] cluster config:
	{Name:no-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:24.954197    5669 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.962380    5669 out.go:177] * Starting "no-preload-431000" primary control-plane node in "no-preload-431000" cluster
	I1003 20:54:24.966428    5669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:54:24.966526    5669 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/no-preload-431000/config.json ...
	I1003 20:54:24.966549    5669 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/no-preload-431000/config.json: {Name:mk0e39d4cea244b77caa9753bd325eb4b1886e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:54:24.966560    5669 cache.go:107] acquiring lock: {Name:mk4ffe7ca6ed0a1363244dc2b9236fd0b2364712 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.966564    5669 cache.go:107] acquiring lock: {Name:mk0044a56e75d5a1ce088d8d746509abcaa87205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.966569    5669 cache.go:107] acquiring lock: {Name:mkb710f64fd3f4280bfe4e6fea4d4943ae5a2a28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.966611    5669 cache.go:107] acquiring lock: {Name:mk2e7a86448524caa375962b0868b6b9fda7c511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.966656    5669 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 20:54:24.966664    5669 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.25µs
	I1003 20:54:24.966671    5669 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 20:54:24.966678    5669 cache.go:107] acquiring lock: {Name:mk43638c432634aba35109c63a94252e65bcb1ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.966694    5669 cache.go:107] acquiring lock: {Name:mk3d74714e12244ae2f7ce5ae4bfa811679ad7b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.966706    5669 cache.go:107] acquiring lock: {Name:mke20b6c1096837f109a9750b66e4b40935a5cba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.966708    5669 cache.go:107] acquiring lock: {Name:mk10d70e4ad422d71f32a5671445928876c76fb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:24.967353    5669 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1003 20:54:24.967361    5669 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1003 20:54:24.967379    5669 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1003 20:54:24.967379    5669 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1003 20:54:24.967365    5669 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1003 20:54:24.967360    5669 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1003 20:54:24.967480    5669 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1003 20:54:24.967545    5669 start.go:360] acquireMachinesLock for no-preload-431000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:24.967608    5669 start.go:364] duration metric: took 57µs to acquireMachinesLock for "no-preload-431000"
	I1003 20:54:24.967625    5669 start.go:93] Provisioning new machine with config: &{Name:no-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:no-preload-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:24.967663    5669 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:24.974331    5669 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:24.978055    5669 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1003 20:54:24.978729    5669 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1003 20:54:24.978753    5669 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1003 20:54:24.978759    5669 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1003 20:54:24.980543    5669 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1003 20:54:24.980934    5669 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1003 20:54:24.980917    5669 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1003 20:54:24.993009    5669 start.go:159] libmachine.API.Create for "no-preload-431000" (driver="qemu2")
	I1003 20:54:24.993027    5669 client.go:168] LocalClient.Create starting
	I1003 20:54:24.993131    5669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:24.993169    5669 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:24.993186    5669 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:24.993222    5669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:24.993253    5669 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:24.993261    5669 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:24.993607    5669 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:25.222704    5669 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:25.392848    5669 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:25.392870    5669 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:25.393098    5669 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:25.403128    5669 main.go:141] libmachine: STDOUT: 
	I1003 20:54:25.403154    5669 main.go:141] libmachine: STDERR: 
	I1003 20:54:25.403230    5669 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2 +20000M
	I1003 20:54:25.411801    5669 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:25.411818    5669 main.go:141] libmachine: STDERR: 
	I1003 20:54:25.411832    5669 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:25.411836    5669 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:25.411849    5669 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:25.411879    5669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d8:b6:44:07:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:25.413740    5669 main.go:141] libmachine: STDOUT: 
	I1003 20:54:25.413755    5669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:25.413773    5669 client.go:171] duration metric: took 420.740875ms to LocalClient.Create
	I1003 20:54:26.891130    5669 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1003 20:54:27.042513    5669 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1003 20:54:27.045793    5669 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1003 20:54:27.157847    5669 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1003 20:54:27.413956    5669 start.go:128] duration metric: took 2.446268708s to createHost
	I1003 20:54:27.414004    5669 start.go:83] releasing machines lock for "no-preload-431000", held for 2.446383375s
	W1003 20:54:27.414059    5669 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:27.427620    5669 out.go:177] * Deleting "no-preload-431000" in qemu2 ...
	W1003 20:54:27.453851    5669 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:27.453876    5669 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:27.543499    5669 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1003 20:54:27.567713    5669 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1003 20:54:27.579287    5669 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1003 20:54:27.743624    5669 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1003 20:54:27.743684    5669 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 2.777010584s
	I1003 20:54:27.743711    5669 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1003 20:54:29.713494    5669 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1003 20:54:29.713559    5669 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.746996834s
	I1003 20:54:29.713589    5669 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1003 20:54:30.131668    5669 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1003 20:54:30.131729    5669 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 5.1650525s
	I1003 20:54:30.131764    5669 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1003 20:54:30.455992    5669 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1003 20:54:30.456045    5669 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 5.489357666s
	I1003 20:54:30.456078    5669 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1003 20:54:30.784804    5669 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1003 20:54:30.784850    5669 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 5.818165292s
	I1003 20:54:30.784875    5669 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1003 20:54:30.809271    5669 cache.go:157] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1003 20:54:30.809309    5669 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.842754541s
	I1003 20:54:30.809330    5669 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1003 20:54:32.459114    5669 start.go:360] acquireMachinesLock for no-preload-431000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:32.474187    5669 start.go:364] duration metric: took 15.010458ms to acquireMachinesLock for "no-preload-431000"
	I1003 20:54:32.474234    5669 start.go:93] Provisioning new machine with config: &{Name:no-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:no-preload-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:32.474428    5669 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:32.483446    5669 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:32.531627    5669 start.go:159] libmachine.API.Create for "no-preload-431000" (driver="qemu2")
	I1003 20:54:32.531665    5669 client.go:168] LocalClient.Create starting
	I1003 20:54:32.531820    5669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:32.531903    5669 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:32.531924    5669 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:32.532015    5669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:32.532071    5669 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:32.532082    5669 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:32.532670    5669 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:32.685795    5669 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:32.734577    5669 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:32.734586    5669 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:32.734825    5669 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:32.745745    5669 main.go:141] libmachine: STDOUT: 
	I1003 20:54:32.745781    5669 main.go:141] libmachine: STDERR: 
	I1003 20:54:32.745855    5669 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2 +20000M
	I1003 20:54:32.755313    5669 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:32.755339    5669 main.go:141] libmachine: STDERR: 
	I1003 20:54:32.755354    5669 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:32.755360    5669 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:32.755368    5669 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:32.755401    5669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:df:b3:63:50:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:32.757483    5669 main.go:141] libmachine: STDOUT: 
	I1003 20:54:32.757498    5669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:32.757511    5669 client.go:171] duration metric: took 225.841041ms to LocalClient.Create
	I1003 20:54:34.757843    5669 start.go:128] duration metric: took 2.283384333s to createHost
	I1003 20:54:34.757920    5669 start.go:83] releasing machines lock for "no-preload-431000", held for 2.283708875s
	W1003 20:54:34.758235    5669 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:34.771943    5669 out.go:201] 
	W1003 20:54:34.775897    5669 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:34.775931    5669 out.go:270] * 
	* 
	W1003 20:54:34.778902    5669 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:34.785906    5669 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (54.8275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.00s)
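The failures in this group all reduce to the same error shown in the stderr above: the driver's socket_vmnet_client cannot connect to the unix socket "/var/run/socket_vmnet", so QEMU never receives a network file descriptor and host creation aborts. A minimal, self-contained Go sketch of that connectivity check follows; the socket path is taken from the logs, while the probe program itself is purely illustrative and is not part of minikube or this test suite.

	// probe_socket_vmnet.go: check whether the socket_vmnet daemon is
	// accepting connections on the path the qemu2 driver uses.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the state this run is in: "Connection refused".
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}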

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (7.29005375s)

                                                
                                                
-- stdout --
	* [old-k8s-version-789000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-789000" primary control-plane node in "old-k8s-version-789000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:25.260091    5695 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:25.260265    5695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:25.260269    5695 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:25.260271    5695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:25.260408    5695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:25.261683    5695 out.go:352] Setting JSON to false
	I1003 20:54:25.280389    5695 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5036,"bootTime":1728009029,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:25.280479    5695 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:25.285324    5695 out.go:177] * [old-k8s-version-789000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:25.293399    5695 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:25.293468    5695 notify.go:220] Checking for updates...
	I1003 20:54:25.300323    5695 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:25.303376    5695 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:25.306298    5695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:25.309324    5695 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:25.312383    5695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:25.315576    5695 config.go:182] Loaded profile config "old-k8s-version-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1003 20:54:25.318317    5695 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1003 20:54:25.321429    5695 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:25.325310    5695 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:54:25.332363    5695 start.go:297] selected driver: qemu2
	I1003 20:54:25.332373    5695 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:25.332427    5695 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:25.334847    5695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:25.334869    5695 cni.go:84] Creating CNI manager for ""
	I1003 20:54:25.334887    5695 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 20:54:25.334918    5695 start.go:340] cluster config:
	{Name:old-k8s-version-789000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789000 Namespace:defaul
t APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:25.338928    5695 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:25.347306    5695 out.go:177] * Starting "old-k8s-version-789000" primary control-plane node in "old-k8s-version-789000" cluster
	I1003 20:54:25.351247    5695 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 20:54:25.351274    5695 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1003 20:54:25.351284    5695 cache.go:56] Caching tarball of preloaded images
	I1003 20:54:25.351369    5695 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:54:25.351374    5695 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1003 20:54:25.351448    5695 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/old-k8s-version-789000/config.json ...
	I1003 20:54:25.351799    5695 start.go:360] acquireMachinesLock for old-k8s-version-789000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:27.414161    5695 start.go:364] duration metric: took 2.062327541s to acquireMachinesLock for "old-k8s-version-789000"
	I1003 20:54:27.414256    5695 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:54:27.414291    5695 fix.go:54] fixHost starting: 
	I1003 20:54:27.414919    5695 fix.go:112] recreateIfNeeded on old-k8s-version-789000: state=Stopped err=<nil>
	W1003 20:54:27.414963    5695 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:54:27.420590    5695 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	I1003 20:54:27.431416    5695 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:27.431648    5695 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:3c:41:ad:aa:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:27.444886    5695 main.go:141] libmachine: STDOUT: 
	I1003 20:54:27.444961    5695 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:27.445084    5695 fix.go:56] duration metric: took 30.799667ms for fixHost
	I1003 20:54:27.445099    5695 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 30.898959ms
	W1003 20:54:27.445128    5695 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:27.445295    5695 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:27.445311    5695 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:32.446252    5695 start.go:360] acquireMachinesLock for old-k8s-version-789000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:32.446828    5695 start.go:364] duration metric: took 443.125µs to acquireMachinesLock for "old-k8s-version-789000"
	I1003 20:54:32.446941    5695 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:54:32.446961    5695 fix.go:54] fixHost starting: 
	I1003 20:54:32.447774    5695 fix.go:112] recreateIfNeeded on old-k8s-version-789000: state=Stopped err=<nil>
	W1003 20:54:32.447806    5695 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:54:32.458438    5695 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-789000" ...
	I1003 20:54:32.462388    5695 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:32.462660    5695 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:3c:41:ad:aa:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/old-k8s-version-789000/disk.qcow2
	I1003 20:54:32.473927    5695 main.go:141] libmachine: STDOUT: 
	I1003 20:54:32.473981    5695 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:32.474109    5695 fix.go:56] duration metric: took 27.14725ms for fixHost
	I1003 20:54:32.474124    5695 start.go:83] releasing machines lock for "old-k8s-version-789000", held for 27.274083ms
	W1003 20:54:32.474295    5695 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-789000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:32.487368    5695 out.go:201] 
	W1003 20:54:32.490516    5695 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:32.490544    5695 out.go:270] * 
	* 
	W1003 20:54:32.492576    5695 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:32.508923    5695 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-789000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (49.541958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.34s)
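The post-mortem block above is the harness probing the profile with "minikube status --format={{.Host}}" and treating exit status 7 together with the output "Stopped" as a non-running host, so log retrieval is skipped. A rough, hypothetical Go sketch of that probe is below; the command line is copied from the report, but the wrapper itself is not the harness's actual code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "old-k8s-version-789000" // profile name from the failing test
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.CombinedOutput()
		code := -1
		if cmd.ProcessState != nil {
			code = cmd.ProcessState.ExitCode()
		}
		// In this report the probe prints "Stopped" and exits 7, which the
		// harness records as "may be ok" before skipping log retrieval.
		fmt.Printf("host state=%q exit=%d err=%v\n", strings.TrimSpace(string(out)), code, err)
	}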

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-789000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (38.011334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-789000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (31.15825ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-789000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-789000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (36.515042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-789000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (31.1815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
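The want/got block above is a go-cmp style diff: every image expected for Kubernetes v1.20.0 is reported missing because "image list" ran against a VM that never started. An illustrative sketch of how such a diff is produced, assuming the github.com/google/go-cmp/cmp package (this is not the test's actual code):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Expected v1.20.0 images, copied from the report above.
		want := []string{
			"k8s.gcr.io/coredns:1.7.0",
			"k8s.gcr.io/etcd:3.4.13-0",
			"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/kube-controller-manager:v1.20.0",
			"k8s.gcr.io/kube-proxy:v1.20.0",
			"k8s.gcr.io/kube-scheduler:v1.20.0",
			"k8s.gcr.io/pause:3.2",
		}
		var got []string // nothing listed: the cluster never came up
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}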

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-789000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-789000 --alsologtostderr -v=1: exit status 83 (42.378792ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-789000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-789000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:32.781288    5735 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:32.781730    5735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:32.781734    5735 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:32.781736    5735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:32.781904    5735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:32.782131    5735 out.go:352] Setting JSON to false
	I1003 20:54:32.782142    5735 mustload.go:65] Loading cluster: old-k8s-version-789000
	I1003 20:54:32.782339    5735 config.go:182] Loaded profile config "old-k8s-version-789000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1003 20:54:32.787371    5735 out.go:177] * The control-plane node old-k8s-version-789000 host is not running: state=Stopped
	I1003 20:54:32.790366    5735 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-789000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-789000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (30.217375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (30.558083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-789000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (11.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-291000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-291000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.552975416s)

                                                
                                                
-- stdout --
	* [embed-certs-291000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-291000" primary control-plane node in "embed-certs-291000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-291000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:33.102692    5753 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:33.102844    5753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:33.102847    5753 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:33.102850    5753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:33.102977    5753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:33.104149    5753 out.go:352] Setting JSON to false
	I1003 20:54:33.121867    5753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5044,"bootTime":1728009029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:33.121932    5753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:33.126318    5753 out.go:177] * [embed-certs-291000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:33.132453    5753 notify.go:220] Checking for updates...
	I1003 20:54:33.136307    5753 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:33.143342    5753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:33.147376    5753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:33.154379    5753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:33.162321    5753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:33.166224    5753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:33.169652    5753 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:33.169718    5753 config.go:182] Loaded profile config "no-preload-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:33.169769    5753 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:33.173327    5753 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:54:33.178285    5753 start.go:297] selected driver: qemu2
	I1003 20:54:33.178291    5753 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:54:33.178297    5753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:33.180645    5753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:54:33.184372    5753 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:54:33.185856    5753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:33.185880    5753 cni.go:84] Creating CNI manager for ""
	I1003 20:54:33.185907    5753 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:54:33.185916    5753 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:54:33.185949    5753 start.go:340] cluster config:
	{Name:embed-certs-291000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:33.190450    5753 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:33.194345    5753 out.go:177] * Starting "embed-certs-291000" primary control-plane node in "embed-certs-291000" cluster
	I1003 20:54:33.202308    5753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:54:33.202326    5753 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:54:33.202336    5753 cache.go:56] Caching tarball of preloaded images
	I1003 20:54:33.202416    5753 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:54:33.202422    5753 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:54:33.202487    5753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/embed-certs-291000/config.json ...
	I1003 20:54:33.202501    5753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/embed-certs-291000/config.json: {Name:mkd0df527793b50383adaa631fb591b8e6d0bf59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:54:33.202756    5753 start.go:360] acquireMachinesLock for embed-certs-291000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:34.758118    5753 start.go:364] duration metric: took 1.555295916s to acquireMachinesLock for "embed-certs-291000"
	I1003 20:54:34.758255    5753 start.go:93] Provisioning new machine with config: &{Name:embed-certs-291000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:embed-certs-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:34.758508    5753 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:34.767889    5753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:34.817936    5753 start.go:159] libmachine.API.Create for "embed-certs-291000" (driver="qemu2")
	I1003 20:54:34.817979    5753 client.go:168] LocalClient.Create starting
	I1003 20:54:34.818103    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:34.818177    5753 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:34.818210    5753 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:34.818278    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:34.818338    5753 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:34.818353    5753 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:34.819037    5753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:34.970873    5753 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:35.216081    5753 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:35.216088    5753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:35.216285    5753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:35.226010    5753 main.go:141] libmachine: STDOUT: 
	I1003 20:54:35.226026    5753 main.go:141] libmachine: STDERR: 
	I1003 20:54:35.226075    5753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2 +20000M
	I1003 20:54:35.234461    5753 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:35.234485    5753 main.go:141] libmachine: STDERR: 
	I1003 20:54:35.234500    5753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:35.234506    5753 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:35.234519    5753 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:35.234549    5753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:11:b9:ad:1c:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:35.236293    5753 main.go:141] libmachine: STDOUT: 
	I1003 20:54:35.236307    5753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:35.236326    5753 client.go:171] duration metric: took 418.340667ms to LocalClient.Create
	I1003 20:54:37.238551    5753 start.go:128] duration metric: took 2.480011167s to createHost
	I1003 20:54:37.238628    5753 start.go:83] releasing machines lock for "embed-certs-291000", held for 2.480469541s
	W1003 20:54:37.238687    5753 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:37.248937    5753 out.go:177] * Deleting "embed-certs-291000" in qemu2 ...
	W1003 20:54:37.274119    5753 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:37.274151    5753 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:42.276374    5753 start.go:360] acquireMachinesLock for embed-certs-291000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:42.276813    5753 start.go:364] duration metric: took 337.792µs to acquireMachinesLock for "embed-certs-291000"
	I1003 20:54:42.276917    5753 start.go:93] Provisioning new machine with config: &{Name:embed-certs-291000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:embed-certs-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:42.277247    5753 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:42.286863    5753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:42.335845    5753 start.go:159] libmachine.API.Create for "embed-certs-291000" (driver="qemu2")
	I1003 20:54:42.335896    5753 client.go:168] LocalClient.Create starting
	I1003 20:54:42.336043    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:42.336127    5753 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:42.336143    5753 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:42.336211    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:42.336271    5753 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:42.336286    5753 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:42.336832    5753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:42.486215    5753 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:42.556966    5753 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:42.556974    5753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:42.557428    5753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:42.567438    5753 main.go:141] libmachine: STDOUT: 
	I1003 20:54:42.567454    5753 main.go:141] libmachine: STDERR: 
	I1003 20:54:42.567508    5753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2 +20000M
	I1003 20:54:42.576044    5753 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:42.576059    5753 main.go:141] libmachine: STDERR: 
	I1003 20:54:42.576069    5753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:42.576076    5753 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:42.576084    5753 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:42.576118    5753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ad:71:6d:c4:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:42.577898    5753 main.go:141] libmachine: STDOUT: 
	I1003 20:54:42.577941    5753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:42.577958    5753 client.go:171] duration metric: took 242.057125ms to LocalClient.Create
	I1003 20:54:44.578361    5753 start.go:128] duration metric: took 2.301003833s to createHost
	I1003 20:54:44.578467    5753 start.go:83] releasing machines lock for "embed-certs-291000", held for 2.301631375s
	W1003 20:54:44.578858    5753 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-291000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:44.589432    5753 out.go:201] 
	W1003 20:54:44.600575    5753 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:44.600611    5753 out.go:270] * 
	* 
	W1003 20:54:44.603381    5753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:44.611443    5753 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-291000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (68.964375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.62s)
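
Both create attempts above fail at the same step: the VM is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 machine is never brought up. A minimal troubleshooting sketch, not part of the recorded run (paths are the ones logged above; nc -U is the BSD netcat unix-socket probe available on macOS):

    # does the socket minikube points at actually exist?
    ls -l /var/run/socket_vmnet
    # probe it; a "Connection refused" here reproduces the failure outside of minikube
    nc -U /var/run/socket_vmnet < /dev/null && echo "socket_vmnet daemon is accepting connections"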

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-431000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-431000 create -f testdata/busybox.yaml: exit status 1 (32.269416ms)

** stderr ** 
	error: context "no-preload-431000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-431000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (35.837584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (36.026083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
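
The deploy step never reaches a cluster: kubectl fails immediately because the no-preload-431000 context was never written to the kubeconfig, the cluster behind it having never come up. An illustrative check against the same kubeconfig, not part of the recorded run:

    KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig \
      kubectl config get-contexts     # the no-preload-431000 entry will be missing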

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-431000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-431000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-431000 describe deploy/metrics-server -n kube-system: exit status 1 (28.513334ms)

** stderr ** 
	error: context "no-preload-431000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-431000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (32.211791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
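
The addons enable command appears to succeed (no non-zero exit is reported), but the follow-up check cannot run: with no live cluster, kubectl cannot describe the metrics-server Deployment that is supposed to carry the overridden image fake.domain/registry.k8s.io/echoserver:1.4. On a running profile the same assertion could be spot-checked directly against the image field, e.g. (illustrative only, not from the recorded run):

    kubectl --context no-preload-431000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4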

TestStartStop/group/no-preload/serial/SecondStart (5.5s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.449695375s)

-- stdout --
	* [no-preload-431000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-431000" primary control-plane node in "no-preload-431000" cluster
	* Restarting existing qemu2 VM for "no-preload-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-431000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1003 20:54:39.241450    5799 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:39.241602    5799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:39.241605    5799 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:39.241608    5799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:39.241741    5799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:39.242799    5799 out.go:352] Setting JSON to false
	I1003 20:54:39.260445    5799 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5050,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:39.260521    5799 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:39.265512    5799 out.go:177] * [no-preload-431000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:39.272514    5799 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:39.272558    5799 notify.go:220] Checking for updates...
	I1003 20:54:39.279545    5799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:39.282505    5799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:39.285532    5799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:39.288531    5799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:39.289866    5799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:39.292827    5799 config.go:182] Loaded profile config "no-preload-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:39.293114    5799 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:39.297517    5799 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:54:39.302471    5799 start.go:297] selected driver: qemu2
	I1003 20:54:39.302477    5799 start.go:901] validating driver "qemu2" against &{Name:no-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:no-preload-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:39.302558    5799 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:39.305250    5799 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:39.305274    5799 cni.go:84] Creating CNI manager for ""
	I1003 20:54:39.305299    5799 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:54:39.305328    5799 start.go:340] cluster config:
	{Name:no-preload-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-431000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:39.309904    5799 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.318505    5799 out.go:177] * Starting "no-preload-431000" primary control-plane node in "no-preload-431000" cluster
	I1003 20:54:39.322484    5799 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:54:39.322591    5799 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/no-preload-431000/config.json ...
	I1003 20:54:39.322610    5799 cache.go:107] acquiring lock: {Name:mk4ffe7ca6ed0a1363244dc2b9236fd0b2364712 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322616    5799 cache.go:107] acquiring lock: {Name:mk0044a56e75d5a1ce088d8d746509abcaa87205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322701    5799 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 20:54:39.322713    5799 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.25µs
	I1003 20:54:39.322718    5799 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 20:54:39.322725    5799 cache.go:107] acquiring lock: {Name:mke20b6c1096837f109a9750b66e4b40935a5cba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322724    5799 cache.go:107] acquiring lock: {Name:mk10d70e4ad422d71f32a5671445928876c76fb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322667    5799 cache.go:107] acquiring lock: {Name:mkb710f64fd3f4280bfe4e6fea4d4943ae5a2a28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322768    5799 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1003 20:54:39.322758    5799 cache.go:107] acquiring lock: {Name:mk3d74714e12244ae2f7ce5ae4bfa811679ad7b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322775    5799 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 197.667µs
	I1003 20:54:39.322782    5799 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1003 20:54:39.322810    5799 cache.go:107] acquiring lock: {Name:mk2e7a86448524caa375962b0868b6b9fda7c511 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322822    5799 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1003 20:54:39.322859    5799 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 151.958µs
	I1003 20:54:39.322868    5799 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1003 20:54:39.322869    5799 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1003 20:54:39.322852    5799 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1003 20:54:39.322874    5799 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 150.083µs
	I1003 20:54:39.322879    5799 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1003 20:54:39.322875    5799 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 128.666µs
	I1003 20:54:39.322919    5799 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1003 20:54:39.322852    5799 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1003 20:54:39.322928    5799 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 313.708µs
	I1003 20:54:39.322933    5799 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1003 20:54:39.322879    5799 cache.go:107] acquiring lock: {Name:mk43638c432634aba35109c63a94252e65bcb1ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:39.322993    5799 cache.go:115] /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1003 20:54:39.322998    5799 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 199.416µs
	I1003 20:54:39.323002    5799 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1003 20:54:39.323010    5799 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1003 20:54:39.323108    5799 start.go:360] acquireMachinesLock for no-preload-431000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:39.323158    5799 start.go:364] duration metric: took 42.417µs to acquireMachinesLock for "no-preload-431000"
	I1003 20:54:39.323167    5799 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:54:39.323171    5799 fix.go:54] fixHost starting: 
	I1003 20:54:39.323298    5799 fix.go:112] recreateIfNeeded on no-preload-431000: state=Stopped err=<nil>
	W1003 20:54:39.323308    5799 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:54:39.331420    5799 out.go:177] * Restarting existing qemu2 VM for "no-preload-431000" ...
	I1003 20:54:39.332193    5799 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1003 20:54:39.334460    5799 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:39.334530    5799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:df:b3:63:50:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:39.336569    5799 main.go:141] libmachine: STDOUT: 
	I1003 20:54:39.336591    5799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:39.336621    5799 fix.go:56] duration metric: took 13.44675ms for fixHost
	I1003 20:54:39.336626    5799 start.go:83] releasing machines lock for "no-preload-431000", held for 13.463375ms
	W1003 20:54:39.336635    5799 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:39.336668    5799 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:39.336672    5799 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:41.295262    5799 cache.go:162] opening:  /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1003 20:54:44.336830    5799 start.go:360] acquireMachinesLock for no-preload-431000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:44.578606    5799 start.go:364] duration metric: took 241.673875ms to acquireMachinesLock for "no-preload-431000"
	I1003 20:54:44.578770    5799 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:54:44.578789    5799 fix.go:54] fixHost starting: 
	I1003 20:54:44.579500    5799 fix.go:112] recreateIfNeeded on no-preload-431000: state=Stopped err=<nil>
	W1003 20:54:44.579526    5799 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:54:44.597506    5799 out.go:177] * Restarting existing qemu2 VM for "no-preload-431000" ...
	I1003 20:54:44.604426    5799 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:44.604595    5799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:df:b3:63:50:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
	I1003 20:54:44.615917    5799 main.go:141] libmachine: STDOUT: 
	I1003 20:54:44.615988    5799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:44.616071    5799 fix.go:56] duration metric: took 37.28275ms for fixHost
	I1003 20:54:44.616098    5799 start.go:83] releasing machines lock for "no-preload-431000", held for 37.456208ms
	W1003 20:54:44.616318    5799 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-431000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:44.630553    5799 out.go:201] 
	W1003 20:54:44.635832    5799 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:44.635860    5799 out.go:270] * 
	* 
	W1003 20:54:44.638092    5799 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:44.647526    5799 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-431000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (46.223583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.50s)
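
This second start takes the fixHost path ("Skipping create...Using existing machine configuration"): most of the control-plane images are already present in the local cache (etcd 3.5.15-0 is still being fetched), the existing disk.qcow2 is reused, and the run still dies on the same refused connection to /var/run/socket_vmnet while restarting the VM. If the leftover machine state itself were in question, it could be inspected with stock QEMU tooling, e.g. (a sketch, not part of the recorded run; paths are the ones logged above):

    qemu-img info /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/disk.qcow2
    cat /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/no-preload-431000/qemu.pid 2>/dev/null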

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-291000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-291000 create -f testdata/busybox.yaml: exit status 1 (30.971916ms)

** stderr ** 
	error: context "embed-certs-291000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-291000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (32.550666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (34.904708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-431000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (35.299208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-431000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-431000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-431000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.83825ms)

** stderr ** 
	error: context "no-preload-431000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-431000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (32.876458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-291000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-291000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-291000 describe deploy/metrics-server -n kube-system: exit status 1 (29.793834ms)

** stderr ** 
	error: context "embed-certs-291000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-291000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (34.205166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-431000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (33.624167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
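
The go-cmp diff above contains only "-" (want) entries: every image expected for v1.31.1 is absent from the image list output because there is no running VM to hold them. Against a healthy profile, the same listing used by the test can be spot-checked by hand, e.g. (illustrative only, not from the recorded run):

    out/minikube-darwin-arm64 -p no-preload-431000 image list --format=json | grep kube-apiserver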

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-431000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-431000 --alsologtostderr -v=1: exit status 83 (42.239417ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-431000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-431000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:44.920355    5837 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:44.920541    5837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:44.920544    5837 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:44.920547    5837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:44.920671    5837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:44.920903    5837 out.go:352] Setting JSON to false
	I1003 20:54:44.920912    5837 mustload.go:65] Loading cluster: no-preload-431000
	I1003 20:54:44.921125    5837 config.go:182] Loaded profile config "no-preload-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:44.924427    5837 out.go:177] * The control-plane node no-preload-431000 host is not running: state=Stopped
	I1003 20:54:44.928437    5837 out.go:177]   To start a cluster, run: "minikube start -p no-preload-431000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-431000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (31.746833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (30.230208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-431000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-329000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-329000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.821924709s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-329000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-329000" primary control-plane node in "default-k8s-diff-port-329000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-329000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:45.351201    5868 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:45.351350    5868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:45.351354    5868 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:45.351356    5868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:45.351485    5868 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:45.352633    5868 out.go:352] Setting JSON to false
	I1003 20:54:45.370663    5868 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5056,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:45.370733    5868 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:45.375494    5868 out.go:177] * [default-k8s-diff-port-329000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:45.382507    5868 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:45.382573    5868 notify.go:220] Checking for updates...
	I1003 20:54:45.388410    5868 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:45.391490    5868 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:45.392792    5868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:45.395423    5868 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:45.398478    5868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:45.401825    5868 config.go:182] Loaded profile config "embed-certs-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:45.401884    5868 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:45.401932    5868 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:45.406383    5868 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:54:45.413444    5868 start.go:297] selected driver: qemu2
	I1003 20:54:45.413456    5868 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:54:45.413462    5868 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:45.416117    5868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 20:54:45.419493    5868 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:54:45.422554    5868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:45.422585    5868 cni.go:84] Creating CNI manager for ""
	I1003 20:54:45.422608    5868 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:54:45.422619    5868 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:54:45.422649    5868 start.go:340] cluster config:
	{Name:default-k8s-diff-port-329000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:45.427346    5868 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:45.435469    5868 out.go:177] * Starting "default-k8s-diff-port-329000" primary control-plane node in "default-k8s-diff-port-329000" cluster
	I1003 20:54:45.439426    5868 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:54:45.439444    5868 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:54:45.439454    5868 cache.go:56] Caching tarball of preloaded images
	I1003 20:54:45.439541    5868 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:54:45.439547    5868 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:54:45.439621    5868 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/default-k8s-diff-port-329000/config.json ...
	I1003 20:54:45.439632    5868 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/default-k8s-diff-port-329000/config.json: {Name:mka54e1362befdb501705eadc868b9e2d94b1e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:54:45.440007    5868 start.go:360] acquireMachinesLock for default-k8s-diff-port-329000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:45.440061    5868 start.go:364] duration metric: took 46.25µs to acquireMachinesLock for "default-k8s-diff-port-329000"
	I1003 20:54:45.440076    5868 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:45.440104    5868 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:45.448469    5868 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:45.466735    5868 start.go:159] libmachine.API.Create for "default-k8s-diff-port-329000" (driver="qemu2")
	I1003 20:54:45.466770    5868 client.go:168] LocalClient.Create starting
	I1003 20:54:45.466845    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:45.466887    5868 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:45.466898    5868 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:45.466946    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:45.466976    5868 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:45.466984    5868 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:45.467481    5868 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:45.608059    5868 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:45.661732    5868 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:45.661738    5868 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:45.661947    5868 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:54:45.671928    5868 main.go:141] libmachine: STDOUT: 
	I1003 20:54:45.671950    5868 main.go:141] libmachine: STDERR: 
	I1003 20:54:45.672001    5868 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2 +20000M
	I1003 20:54:45.680659    5868 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:45.680678    5868 main.go:141] libmachine: STDERR: 
	I1003 20:54:45.680700    5868 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:54:45.680708    5868 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:45.680722    5868 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:45.680749    5868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:ff:6c:48:3a:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:54:45.682556    5868 main.go:141] libmachine: STDOUT: 
	I1003 20:54:45.682568    5868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:45.682594    5868 client.go:171] duration metric: took 215.816584ms to LocalClient.Create
	I1003 20:54:47.684845    5868 start.go:128] duration metric: took 2.244707208s to createHost
	I1003 20:54:47.684915    5868 start.go:83] releasing machines lock for "default-k8s-diff-port-329000", held for 2.244844458s
	W1003 20:54:47.684960    5868 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:47.696373    5868 out.go:177] * Deleting "default-k8s-diff-port-329000" in qemu2 ...
	W1003 20:54:47.719279    5868 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:47.719318    5868 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:52.721605    5868 start.go:360] acquireMachinesLock for default-k8s-diff-port-329000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:52.722242    5868 start.go:364] duration metric: took 509.541µs to acquireMachinesLock for "default-k8s-diff-port-329000"
	I1003 20:54:52.722377    5868 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:52.722628    5868 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:52.729234    5868 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:52.776019    5868 start.go:159] libmachine.API.Create for "default-k8s-diff-port-329000" (driver="qemu2")
	I1003 20:54:52.776084    5868 client.go:168] LocalClient.Create starting
	I1003 20:54:52.776242    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:52.776323    5868 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:52.776351    5868 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:52.776425    5868 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:52.776483    5868 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:52.776497    5868 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:52.777246    5868 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:52.931607    5868 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:53.077907    5868 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:53.077914    5868 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:53.078133    5868 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:54:53.088543    5868 main.go:141] libmachine: STDOUT: 
	I1003 20:54:53.088557    5868 main.go:141] libmachine: STDERR: 
	I1003 20:54:53.088608    5868 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2 +20000M
	I1003 20:54:53.097240    5868 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:53.097255    5868 main.go:141] libmachine: STDERR: 
	I1003 20:54:53.097269    5868 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:54:53.097274    5868 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:53.097282    5868 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:53.097307    5868 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:68:aa:97:40:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:54:53.099142    5868 main.go:141] libmachine: STDOUT: 
	I1003 20:54:53.099155    5868 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:53.099167    5868 client.go:171] duration metric: took 323.07725ms to LocalClient.Create
	I1003 20:54:55.101339    5868 start.go:128] duration metric: took 2.378681542s to createHost
	I1003 20:54:55.101516    5868 start.go:83] releasing machines lock for "default-k8s-diff-port-329000", held for 2.37913475s
	W1003 20:54:55.101857    5868 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-329000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-329000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:55.115387    5868 out.go:201] 
	W1003 20:54:55.118453    5868 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:55.118474    5868 out.go:270] * 
	* 
	W1003 20:54:55.121433    5868 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:55.129358    5868 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-329000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (69.008667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
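As with the other qemu2 starts in this report, the underlying failure is socket_vmnet_client reporting Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon is not serving its unix socket on the build agent. A hedged manual check (assuming socket_vmnet is installed under /opt/socket_vmnet and runs as a launchd service, as the client path in the log suggests) would be:

	ls -l /var/run/socket_vmnet                  # the unix socket should exist
	sudo launchctl list | grep -i socket_vmnet   # the daemon should be loaded and running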

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-291000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-291000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.576749292s)

                                                
                                                
-- stdout --
	* [embed-certs-291000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-291000" primary control-plane node in "embed-certs-291000" cluster
	* Restarting existing qemu2 VM for "embed-certs-291000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-291000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:48.617846    5894 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:48.617986    5894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:48.617989    5894 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:48.617992    5894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:48.618124    5894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:48.619136    5894 out.go:352] Setting JSON to false
	I1003 20:54:48.636690    5894 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5059,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:48.636788    5894 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:48.642058    5894 out.go:177] * [embed-certs-291000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:48.648056    5894 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:48.648104    5894 notify.go:220] Checking for updates...
	I1003 20:54:48.654974    5894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:48.658006    5894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:48.660970    5894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:48.662284    5894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:48.664929    5894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:48.668320    5894 config.go:182] Loaded profile config "embed-certs-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:48.668566    5894 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:48.672771    5894 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:54:48.679976    5894 start.go:297] selected driver: qemu2
	I1003 20:54:48.679987    5894 start.go:901] validating driver "qemu2" against &{Name:embed-certs-291000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:embed-certs-291000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:48.680055    5894 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:48.682575    5894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:48.682601    5894 cni.go:84] Creating CNI manager for ""
	I1003 20:54:48.682632    5894 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:54:48.682653    5894 start.go:340] cluster config:
	{Name:embed-certs-291000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-291000 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:48.687204    5894 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:48.690977    5894 out.go:177] * Starting "embed-certs-291000" primary control-plane node in "embed-certs-291000" cluster
	I1003 20:54:48.694927    5894 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:54:48.694943    5894 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:54:48.694954    5894 cache.go:56] Caching tarball of preloaded images
	I1003 20:54:48.695045    5894 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:54:48.695051    5894 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:54:48.695114    5894 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/embed-certs-291000/config.json ...
	I1003 20:54:48.695491    5894 start.go:360] acquireMachinesLock for embed-certs-291000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:48.695537    5894 start.go:364] duration metric: took 39.833µs to acquireMachinesLock for "embed-certs-291000"
	I1003 20:54:48.695545    5894 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:54:48.695551    5894 fix.go:54] fixHost starting: 
	I1003 20:54:48.695664    5894 fix.go:112] recreateIfNeeded on embed-certs-291000: state=Stopped err=<nil>
	W1003 20:54:48.695673    5894 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:54:48.703957    5894 out.go:177] * Restarting existing qemu2 VM for "embed-certs-291000" ...
	I1003 20:54:48.707983    5894 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:48.708021    5894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ad:71:6d:c4:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:48.710160    5894 main.go:141] libmachine: STDOUT: 
	I1003 20:54:48.710177    5894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:48.710208    5894 fix.go:56] duration metric: took 14.656834ms for fixHost
	I1003 20:54:48.710214    5894 start.go:83] releasing machines lock for "embed-certs-291000", held for 14.672542ms
	W1003 20:54:48.710221    5894 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:48.710253    5894 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:48.710258    5894 start.go:729] Will try again in 5 seconds ...
	I1003 20:54:53.712460    5894 start.go:360] acquireMachinesLock for embed-certs-291000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:55.101700    5894 start.go:364] duration metric: took 1.389116041s to acquireMachinesLock for "embed-certs-291000"
	I1003 20:54:55.101875    5894 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:54:55.101891    5894 fix.go:54] fixHost starting: 
	I1003 20:54:55.102587    5894 fix.go:112] recreateIfNeeded on embed-certs-291000: state=Stopped err=<nil>
	W1003 20:54:55.102620    5894 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:54:55.115373    5894 out.go:177] * Restarting existing qemu2 VM for "embed-certs-291000" ...
	I1003 20:54:55.118404    5894 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:55.118595    5894 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ad:71:6d:c4:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/embed-certs-291000/disk.qcow2
	I1003 20:54:55.129070    5894 main.go:141] libmachine: STDOUT: 
	I1003 20:54:55.129122    5894 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:55.129200    5894 fix.go:56] duration metric: took 27.309625ms for fixHost
	I1003 20:54:55.129222    5894 start.go:83] releasing machines lock for "embed-certs-291000", held for 27.480292ms
	W1003 20:54:55.129419    5894 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-291000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:55.143418    5894 out.go:201] 
	W1003 20:54:55.147897    5894 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:55.147932    5894 out.go:270] * 
	* 
	W1003 20:54:55.150268    5894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:54:55.157287    5894 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-291000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (54.085708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-329000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-329000 create -f testdata/busybox.yaml: exit status 1 (30.764459ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-329000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-329000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (34.012125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (36.446416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-291000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (34.809541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-291000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-291000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-291000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.582875ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-291000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-291000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (32.567875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-329000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-329000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-329000 describe deploy/metrics-server -n kube-system: exit status 1 (28.99025ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-329000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-329000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (34.242542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)
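Note: the assertion at start_stop_delete_test.go:221 expects the metrics-server Deployment to carry the override image "fake.domain/registry.k8s.io/echoserver:1.4" supplied via --images/--registries. A minimal manual sketch of the same check, assuming the kube context actually existed (here it does not, so kubectl would return the same "context does not exist" error):

	# Sketch only; jsonpath output shape may vary with kubectl version.
	kubectl --context default-k8s-diff-port-329000 -n kube-system \
	  get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'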

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-291000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (32.746833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
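Note: the -want/+got diff above lists every expected v1.31.1 image as missing because "image list" ran against a profile whose VM never came up. Against a cluster that did start, a rough manual spot-check would look like the sketch below (output format depends on the minikube version):

	out/minikube-darwin-arm64 -p embed-certs-291000 image list \
	  | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy):v1.31.1|etcd:3.5.15-0|coredns|pause:3.10|storage-provisioner'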

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-291000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-291000 --alsologtostderr -v=1: exit status 83 (50.869875ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-291000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-291000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:55.441791    5927 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:55.441964    5927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:55.441967    5927 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:55.441969    5927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:55.442088    5927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:55.442296    5927 out.go:352] Setting JSON to false
	I1003 20:54:55.442307    5927 mustload.go:65] Loading cluster: embed-certs-291000
	I1003 20:54:55.442535    5927 config.go:182] Loaded profile config "embed-certs-291000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:55.447313    5927 out.go:177] * The control-plane node embed-certs-291000 host is not running: state=Stopped
	I1003 20:54:55.454352    5927 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-291000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-291000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (32.028334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (29.525958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-291000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
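Note: pause exits with status 83 here because the control-plane host is Stopped; minikube will not pause a node that is not running. A quick pre-check before pausing (a sketch using the same Go-template status fields this report already relies on):

	out/minikube-darwin-arm64 status -p embed-certs-291000 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# only attempt "pause" when Host reports Running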

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (10s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.923161667s)

                                                
                                                
-- stdout --
	* [newest-cni-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-384000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:55.765973    5950 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:55.766132    5950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:55.766136    5950 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:55.766138    5950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:55.766271    5950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:55.767455    5950 out.go:352] Setting JSON to false
	I1003 20:54:55.785197    5950 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5066,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:55.785318    5950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:55.790359    5950 out.go:177] * [newest-cni-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:55.797452    5950 notify.go:220] Checking for updates...
	I1003 20:54:55.801335    5950 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:55.809330    5950 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:55.813333    5950 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:55.816357    5950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:55.819357    5950 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:55.822312    5950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:55.825708    5950 config.go:182] Loaded profile config "default-k8s-diff-port-329000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:55.825768    5950 config.go:182] Loaded profile config "multinode-817000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:55.825821    5950 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:55.830374    5950 out.go:177] * Using the qemu2 driver based on user configuration
	I1003 20:54:55.837322    5950 start.go:297] selected driver: qemu2
	I1003 20:54:55.837329    5950 start.go:901] validating driver "qemu2" against <nil>
	I1003 20:54:55.837336    5950 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:55.839811    5950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1003 20:54:55.839850    5950 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1003 20:54:55.847348    5950 out.go:177] * Automatically selected the socket_vmnet network
	I1003 20:54:55.851472    5950 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 20:54:55.851492    5950 cni.go:84] Creating CNI manager for ""
	I1003 20:54:55.851516    5950 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:54:55.851525    5950 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 20:54:55.851554    5950 start.go:340] cluster config:
	{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:55.856482    5950 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:55.860370    5950 out.go:177] * Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	I1003 20:54:55.868429    5950 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:54:55.868447    5950 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:54:55.868461    5950 cache.go:56] Caching tarball of preloaded images
	I1003 20:54:55.868569    5950 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:54:55.868575    5950 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:54:55.868638    5950 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/newest-cni-384000/config.json ...
	I1003 20:54:55.868650    5950 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/newest-cni-384000/config.json: {Name:mk14ac4975484b0248241838ef3474dcbdcb8061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 20:54:55.869020    5950 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:55.869076    5950 start.go:364] duration metric: took 49.459µs to acquireMachinesLock for "newest-cni-384000"
	I1003 20:54:55.869089    5950 start.go:93] Provisioning new machine with config: &{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:54:55.869125    5950 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:54:55.873424    5950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:54:55.892718    5950 start.go:159] libmachine.API.Create for "newest-cni-384000" (driver="qemu2")
	I1003 20:54:55.892746    5950 client.go:168] LocalClient.Create starting
	I1003 20:54:55.892826    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:54:55.892870    5950 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:55.892881    5950 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:55.892934    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:54:55.892967    5950 main.go:141] libmachine: Decoding PEM data...
	I1003 20:54:55.892974    5950 main.go:141] libmachine: Parsing certificate...
	I1003 20:54:55.893416    5950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:54:56.031682    5950 main.go:141] libmachine: Creating SSH key...
	I1003 20:54:56.236962    5950 main.go:141] libmachine: Creating Disk image...
	I1003 20:54:56.236972    5950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:54:56.237194    5950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:54:56.247059    5950 main.go:141] libmachine: STDOUT: 
	I1003 20:54:56.247075    5950 main.go:141] libmachine: STDERR: 
	I1003 20:54:56.247136    5950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2 +20000M
	I1003 20:54:56.255473    5950 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:54:56.255487    5950 main.go:141] libmachine: STDERR: 
	I1003 20:54:56.255508    5950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:54:56.255515    5950 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:54:56.255535    5950 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:56.255577    5950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7d:a8:e4:24:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:54:56.257446    5950 main.go:141] libmachine: STDOUT: 
	I1003 20:54:56.257460    5950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:56.257478    5950 client.go:171] duration metric: took 364.726041ms to LocalClient.Create
	I1003 20:54:58.259672    5950 start.go:128] duration metric: took 2.390518958s to createHost
	I1003 20:54:58.259744    5950 start.go:83] releasing machines lock for "newest-cni-384000", held for 2.390656417s
	W1003 20:54:58.259800    5950 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:58.277235    5950 out.go:177] * Deleting "newest-cni-384000" in qemu2 ...
	W1003 20:54:58.304757    5950 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:58.304792    5950 start.go:729] Will try again in 5 seconds ...
	I1003 20:55:03.306957    5950 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:55:03.315360    5950 start.go:364] duration metric: took 8.326459ms to acquireMachinesLock for "newest-cni-384000"
	I1003 20:55:03.315423    5950 start.go:93] Provisioning new machine with config: &{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1003 20:55:03.315625    5950 start.go:125] createHost starting for "" (driver="qemu2")
	I1003 20:55:03.323563    5950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1003 20:55:03.371865    5950 start.go:159] libmachine.API.Create for "newest-cni-384000" (driver="qemu2")
	I1003 20:55:03.371931    5950 client.go:168] LocalClient.Create starting
	I1003 20:55:03.372081    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/ca.pem
	I1003 20:55:03.372160    5950 main.go:141] libmachine: Decoding PEM data...
	I1003 20:55:03.372177    5950 main.go:141] libmachine: Parsing certificate...
	I1003 20:55:03.372236    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19546-1040/.minikube/certs/cert.pem
	I1003 20:55:03.372292    5950 main.go:141] libmachine: Decoding PEM data...
	I1003 20:55:03.372306    5950 main.go:141] libmachine: Parsing certificate...
	I1003 20:55:03.372827    5950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1003 20:55:03.526095    5950 main.go:141] libmachine: Creating SSH key...
	I1003 20:55:03.599494    5950 main.go:141] libmachine: Creating Disk image...
	I1003 20:55:03.599506    5950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1003 20:55:03.599732    5950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2.raw /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:55:03.610023    5950 main.go:141] libmachine: STDOUT: 
	I1003 20:55:03.610048    5950 main.go:141] libmachine: STDERR: 
	I1003 20:55:03.610120    5950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2 +20000M
	I1003 20:55:03.620005    5950 main.go:141] libmachine: STDOUT: Image resized.
	
	I1003 20:55:03.620028    5950 main.go:141] libmachine: STDERR: 
	I1003 20:55:03.620045    5950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:55:03.620052    5950 main.go:141] libmachine: Starting QEMU VM...
	I1003 20:55:03.620061    5950 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:55:03.620107    5950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:14:57:8e:85:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:55:03.622052    5950 main.go:141] libmachine: STDOUT: 
	I1003 20:55:03.622068    5950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:55:03.622081    5950 client.go:171] duration metric: took 250.144542ms to LocalClient.Create
	I1003 20:55:05.624299    5950 start.go:128] duration metric: took 2.308628042s to createHost
	I1003 20:55:05.624399    5950 start.go:83] releasing machines lock for "newest-cni-384000", held for 2.309011708s
	W1003 20:55:05.624824    5950 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-384000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:55:05.634494    5950 out.go:201] 
	W1003 20:55:05.638544    5950 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:55:05.638572    5950 out.go:270] * 
	* 
	W1003 20:55:05.641552    5950 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:55:05.651511    5950 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (69.694375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.00s)
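Note: every qemu2 start and restart in this report fails at the same step: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), and the start is aborted with GUEST_PROVISION. A minimal sketch for checking the daemon on the build host (paths are the defaults shown in these logs; the restart line assumes a Homebrew-managed socket_vmnet service, which may not match this Jenkins agent):

	ls -l /var/run/socket_vmnet      # listening unix socket should exist
	pgrep -fl socket_vmnet           # socket_vmnet daemon should be running (as root)
	# if the daemon is down, restart the service (Homebrew example, assumption):
	sudo "$(which brew)" services restart socket_vmnet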

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-329000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-329000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.676691625s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-329000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-329000" primary control-plane node in "default-k8s-diff-port-329000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-329000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-329000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:54:57.707613    5972 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:54:57.707772    5972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:57.707775    5972 out.go:358] Setting ErrFile to fd 2...
	I1003 20:54:57.707777    5972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:54:57.707905    5972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:54:57.709053    5972 out.go:352] Setting JSON to false
	I1003 20:54:57.726788    5972 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5068,"bootTime":1728009029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:54:57.726853    5972 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:54:57.730897    5972 out.go:177] * [default-k8s-diff-port-329000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:54:57.737900    5972 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:54:57.737948    5972 notify.go:220] Checking for updates...
	I1003 20:54:57.744939    5972 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:54:57.747957    5972 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:54:57.750916    5972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:54:57.753933    5972 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:54:57.756940    5972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:54:57.760183    5972 config.go:182] Loaded profile config "default-k8s-diff-port-329000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:54:57.760448    5972 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:54:57.764887    5972 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:54:57.770795    5972 start.go:297] selected driver: qemu2
	I1003 20:54:57.770801    5972 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:57.770844    5972 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:54:57.773317    5972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 20:54:57.773339    5972 cni.go:84] Creating CNI manager for ""
	I1003 20:54:57.773360    5972 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:54:57.773387    5972 start.go:340] cluster config:
	{Name:default-k8s-diff-port-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-329000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:54:57.778038    5972 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:54:57.785973    5972 out.go:177] * Starting "default-k8s-diff-port-329000" primary control-plane node in "default-k8s-diff-port-329000" cluster
	I1003 20:54:57.791940    5972 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:54:57.791957    5972 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:54:57.791965    5972 cache.go:56] Caching tarball of preloaded images
	I1003 20:54:57.792029    5972 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:54:57.792035    5972 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:54:57.792104    5972 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/default-k8s-diff-port-329000/config.json ...
	I1003 20:54:57.792568    5972 start.go:360] acquireMachinesLock for default-k8s-diff-port-329000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:54:58.259914    5972 start.go:364] duration metric: took 467.251084ms to acquireMachinesLock for "default-k8s-diff-port-329000"
	I1003 20:54:58.260011    5972 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:54:58.260039    5972 fix.go:54] fixHost starting: 
	I1003 20:54:58.260737    5972 fix.go:112] recreateIfNeeded on default-k8s-diff-port-329000: state=Stopped err=<nil>
	W1003 20:54:58.260776    5972 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:54:58.269272    5972 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-329000" ...
	I1003 20:54:58.281267    5972 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:54:58.281473    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:68:aa:97:40:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:54:58.294069    5972 main.go:141] libmachine: STDOUT: 
	I1003 20:54:58.294132    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:54:58.294287    5972 fix.go:56] duration metric: took 34.223375ms for fixHost
	I1003 20:54:58.294301    5972 start.go:83] releasing machines lock for "default-k8s-diff-port-329000", held for 34.330916ms
	W1003 20:54:58.294329    5972 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:54:58.294479    5972 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:54:58.294496    5972 start.go:729] Will try again in 5 seconds ...
	I1003 20:55:03.296725    5972 start.go:360] acquireMachinesLock for default-k8s-diff-port-329000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:55:03.297212    5972 start.go:364] duration metric: took 420.625µs to acquireMachinesLock for "default-k8s-diff-port-329000"
	I1003 20:55:03.297336    5972 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:55:03.297359    5972 fix.go:54] fixHost starting: 
	I1003 20:55:03.298156    5972 fix.go:112] recreateIfNeeded on default-k8s-diff-port-329000: state=Stopped err=<nil>
	W1003 20:55:03.298182    5972 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:55:03.300649    5972 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-329000" ...
	I1003 20:55:03.304481    5972 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:55:03.304653    5972 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:68:aa:97:40:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/default-k8s-diff-port-329000/disk.qcow2
	I1003 20:55:03.315140    5972 main.go:141] libmachine: STDOUT: 
	I1003 20:55:03.315200    5972 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:55:03.315286    5972 fix.go:56] duration metric: took 17.932292ms for fixHost
	I1003 20:55:03.315303    5972 start.go:83] releasing machines lock for "default-k8s-diff-port-329000", held for 18.070792ms
	W1003 20:55:03.315505    5972 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-329000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-329000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:55:03.330553    5972 out.go:201] 
	W1003 20:55:03.334600    5972 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:55:03.334624    5972 out.go:270] * 
	* 
	W1003 20:55:03.336533    5972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:55:03.345551    5972 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-329000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (54.884083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-329000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (36.920833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-329000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-329000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-329000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (31.6995ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-329000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-329000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (37.59225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-329000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (32.1975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-329000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-329000 --alsologtostderr -v=1: exit status 83 (43.318958ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-329000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-329000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:55:03.625143    5992 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:55:03.625317    5992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:55:03.625321    5992 out.go:358] Setting ErrFile to fd 2...
	I1003 20:55:03.625324    5992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:55:03.625449    5992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:55:03.625664    5992 out.go:352] Setting JSON to false
	I1003 20:55:03.625673    5992 mustload.go:65] Loading cluster: default-k8s-diff-port-329000
	I1003 20:55:03.625889    5992 config.go:182] Loaded profile config "default-k8s-diff-port-329000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:55:03.630555    5992 out.go:177] * The control-plane node default-k8s-diff-port-329000 host is not running: state=Stopped
	I1003 20:55:03.633456    5992 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-329000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-329000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (30.509125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (31.008375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-329000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.1870065s)

                                                
                                                
-- stdout --
	* [newest-cni-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	* Restarting existing qemu2 VM for "newest-cni-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-384000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:55:07.971648    6032 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:55:07.971785    6032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:55:07.971788    6032 out.go:358] Setting ErrFile to fd 2...
	I1003 20:55:07.971790    6032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:55:07.971937    6032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:55:07.973011    6032 out.go:352] Setting JSON to false
	I1003 20:55:07.990592    6032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5078,"bootTime":1728009029,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:55:07.990655    6032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:55:07.995772    6032 out.go:177] * [newest-cni-384000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:55:08.002734    6032 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:55:08.002793    6032 notify.go:220] Checking for updates...
	I1003 20:55:08.009628    6032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:55:08.012692    6032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:55:08.015744    6032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:55:08.018611    6032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:55:08.021681    6032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:55:08.025063    6032 config.go:182] Loaded profile config "newest-cni-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:55:08.025366    6032 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:55:08.028662    6032 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:55:08.035692    6032 start.go:297] selected driver: qemu2
	I1003 20:55:08.035697    6032 start.go:901] validating driver "qemu2" against &{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:newest-cni-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Li
stenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:55:08.035752    6032 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:55:08.038252    6032 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 20:55:08.038279    6032 cni.go:84] Creating CNI manager for ""
	I1003 20:55:08.038304    6032 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 20:55:08.038332    6032 start.go:340] cluster config:
	{Name:newest-cni-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-384000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:55:08.042849    6032 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 20:55:08.050721    6032 out.go:177] * Starting "newest-cni-384000" primary control-plane node in "newest-cni-384000" cluster
	I1003 20:55:08.053612    6032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 20:55:08.053629    6032 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 20:55:08.053643    6032 cache.go:56] Caching tarball of preloaded images
	I1003 20:55:08.053722    6032 preload.go:172] Found /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1003 20:55:08.053728    6032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1003 20:55:08.053810    6032 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/newest-cni-384000/config.json ...
	I1003 20:55:08.054215    6032 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:55:08.054263    6032 start.go:364] duration metric: took 42.084µs to acquireMachinesLock for "newest-cni-384000"
	I1003 20:55:08.054271    6032 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:55:08.054275    6032 fix.go:54] fixHost starting: 
	I1003 20:55:08.054393    6032 fix.go:112] recreateIfNeeded on newest-cni-384000: state=Stopped err=<nil>
	W1003 20:55:08.054402    6032 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:55:08.058707    6032 out.go:177] * Restarting existing qemu2 VM for "newest-cni-384000" ...
	I1003 20:55:08.065651    6032 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:55:08.065699    6032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:14:57:8e:85:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:55:08.067933    6032 main.go:141] libmachine: STDOUT: 
	I1003 20:55:08.067952    6032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:55:08.067981    6032 fix.go:56] duration metric: took 13.703041ms for fixHost
	I1003 20:55:08.067993    6032 start.go:83] releasing machines lock for "newest-cni-384000", held for 13.719333ms
	W1003 20:55:08.068001    6032 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:55:08.068046    6032 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:55:08.068051    6032 start.go:729] Will try again in 5 seconds ...
	I1003 20:55:13.069878    6032 start.go:360] acquireMachinesLock for newest-cni-384000: {Name:mkaeb8d4c84fa9b2d22c3aafb06fc4eafb6e3c75 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1003 20:55:13.070379    6032 start.go:364] duration metric: took 417.791µs to acquireMachinesLock for "newest-cni-384000"
	I1003 20:55:13.070503    6032 start.go:96] Skipping create...Using existing machine configuration
	I1003 20:55:13.070522    6032 fix.go:54] fixHost starting: 
	I1003 20:55:13.071337    6032 fix.go:112] recreateIfNeeded on newest-cni-384000: state=Stopped err=<nil>
	W1003 20:55:13.071364    6032 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 20:55:13.079792    6032 out.go:177] * Restarting existing qemu2 VM for "newest-cni-384000" ...
	I1003 20:55:13.083959    6032 qemu.go:418] Using hvf for hardware acceleration
	I1003 20:55:13.084121    6032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:14:57:8e:85:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19546-1040/.minikube/machines/newest-cni-384000/disk.qcow2
	I1003 20:55:13.094942    6032 main.go:141] libmachine: STDOUT: 
	I1003 20:55:13.094996    6032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1003 20:55:13.095085    6032 fix.go:56] duration metric: took 24.552042ms for fixHost
	I1003 20:55:13.095104    6032 start.go:83] releasing machines lock for "newest-cni-384000", held for 24.7035ms
	W1003 20:55:13.095271    6032 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-384000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1003 20:55:13.101905    6032 out.go:201] 
	W1003 20:55:13.105981    6032 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1003 20:55:13.106004    6032 out.go:270] * 
	* 
	W1003 20:55:13.108334    6032 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 20:55:13.117025    6032 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-384000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (73.4015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-384000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (31.798792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-384000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-384000 --alsologtostderr -v=1: exit status 83 (43.650292ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-384000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-384000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 20:55:13.307342    6046 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:55:13.307534    6046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:55:13.307537    6046 out.go:358] Setting ErrFile to fd 2...
	I1003 20:55:13.307539    6046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:55:13.307662    6046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:55:13.307882    6046 out.go:352] Setting JSON to false
	I1003 20:55:13.307890    6046 mustload.go:65] Loading cluster: newest-cni-384000
	I1003 20:55:13.308111    6046 config.go:182] Loaded profile config "newest-cni-384000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:55:13.312435    6046 out.go:177] * The control-plane node newest-cni-384000 host is not running: state=Stopped
	I1003 20:55:13.316435    6046 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-384000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-384000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (31.761208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (31.746291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-384000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (155/275)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 17.46
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 253.69
29 TestAddons/serial/Volcano 38.85
31 TestAddons/serial/GCPAuth/Namespaces 0.08
34 TestAddons/parallel/Registry 19.53
35 TestAddons/parallel/Ingress 17.74
36 TestAddons/parallel/InspektorGadget 10.24
37 TestAddons/parallel/Logviewer 6.21
38 TestAddons/parallel/MetricsServer 5.29
40 TestAddons/parallel/CSI 53.51
41 TestAddons/parallel/Headlamp 16.7
42 TestAddons/parallel/CloudSpanner 5.21
43 TestAddons/parallel/LocalPath 40.96
44 TestAddons/parallel/NvidiaDevicePlugin 6.16
45 TestAddons/parallel/Yakd 11.24
46 TestAddons/StoppedEnableDisable 12.35
54 TestHyperKitDriverInstallOrUpdate 10.99
57 TestErrorSpam/setup 34.37
58 TestErrorSpam/start 0.35
59 TestErrorSpam/status 0.25
60 TestErrorSpam/pause 0.7
61 TestErrorSpam/unpause 0.62
62 TestErrorSpam/stop 64.31
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 49.82
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 36.65
69 TestFunctional/serial/KubeContext 0.03
70 TestFunctional/serial/KubectlGetPods 0.04
73 TestFunctional/serial/CacheCmd/cache/add_remote 9.42
74 TestFunctional/serial/CacheCmd/cache/add_local 1.65
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
76 TestFunctional/serial/CacheCmd/cache/list 0.04
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
78 TestFunctional/serial/CacheCmd/cache/cache_reload 2.18
79 TestFunctional/serial/CacheCmd/cache/delete 0.07
80 TestFunctional/serial/MinikubeKubectlCmd 0.82
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
82 TestFunctional/serial/ExtraConfig 62.77
83 TestFunctional/serial/ComponentHealth 0.04
84 TestFunctional/serial/LogsCmd 0.69
85 TestFunctional/serial/LogsFileCmd 0.64
86 TestFunctional/serial/InvalidService 4.1
88 TestFunctional/parallel/ConfigCmd 0.24
89 TestFunctional/parallel/DashboardCmd 8.07
90 TestFunctional/parallel/DryRun 0.23
91 TestFunctional/parallel/InternationalLanguage 0.11
92 TestFunctional/parallel/StatusCmd 0.25
97 TestFunctional/parallel/AddonsCmd 0.1
98 TestFunctional/parallel/PersistentVolumeClaim 25.94
100 TestFunctional/parallel/SSHCmd 0.13
101 TestFunctional/parallel/CpCmd 0.43
103 TestFunctional/parallel/FileSync 0.07
104 TestFunctional/parallel/CertSync 0.43
108 TestFunctional/parallel/NodeLabels 0.05
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
112 TestFunctional/parallel/License 1.4
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.07
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
124 TestFunctional/parallel/ServiceCmd/DeployApp 6.11
125 TestFunctional/parallel/ServiceCmd/List 0.31
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
128 TestFunctional/parallel/ServiceCmd/Format 0.1
129 TestFunctional/parallel/ServiceCmd/URL 0.1
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
131 TestFunctional/parallel/ProfileCmd/profile_list 0.13
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
133 TestFunctional/parallel/MountCmd/any-port 11.05
134 TestFunctional/parallel/MountCmd/specific-port 1.03
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
136 TestFunctional/parallel/Version/short 0.05
137 TestFunctional/parallel/Version/components 0.17
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
142 TestFunctional/parallel/ImageCommands/ImageBuild 4.73
143 TestFunctional/parallel/ImageCommands/Setup 1.65
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.58
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
151 TestFunctional/parallel/DockerEnv/bash 0.29
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.01
161 TestMultiControlPlane/serial/StartCluster 230.44
162 TestMultiControlPlane/serial/DeployApp 10.02
163 TestMultiControlPlane/serial/PingHostFromPods 0.74
164 TestMultiControlPlane/serial/AddWorkerNode 87.19
165 TestMultiControlPlane/serial/NodeLabels 0.15
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.3
167 TestMultiControlPlane/serial/CopyFile 4.16
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 3.72
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
213 TestMainNoArgs 0.03
260 TestStoppedBinaryUpgrade/Setup 4.79
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
277 TestNoKubernetes/serial/ProfileList 31.42
278 TestNoKubernetes/serial/Stop 3.7
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
295 TestStartStop/group/old-k8s-version/serial/Stop 3.22
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
308 TestStartStop/group/no-preload/serial/Stop 4
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/embed-certs/serial/Stop 3.55
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.1
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
339 TestStartStop/group/newest-cni/serial/Stop 2.02
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1003 19:48:06.073934    1556 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1003 19:48:06.074278    1556 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-360000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-360000: exit status 85 (92.735ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-360000 | jenkins | v1.34.0 | 03 Oct 24 19:47 PDT |          |
	|         | -p download-only-360000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 19:47:27
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:47:27.309002    1557 out.go:345] Setting OutFile to fd 1 ...
	I1003 19:47:27.309156    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:47:27.309159    1557 out.go:358] Setting ErrFile to fd 2...
	I1003 19:47:27.309162    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:47:27.309293    1557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	W1003 19:47:27.309397    1557 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19546-1040/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19546-1040/.minikube/config/config.json: no such file or directory
	I1003 19:47:27.310814    1557 out.go:352] Setting JSON to true
	I1003 19:47:27.330098    1557 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1018,"bootTime":1728009029,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 19:47:27.330157    1557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 19:47:27.335722    1557 out.go:97] [download-only-360000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 19:47:27.335887    1557 notify.go:220] Checking for updates...
	W1003 19:47:27.335896    1557 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 19:47:27.338688    1557 out.go:169] MINIKUBE_LOCATION=19546
	I1003 19:47:27.339876    1557 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 19:47:27.343721    1557 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 19:47:27.350671    1557 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:47:27.357657    1557 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	W1003 19:47:27.364714    1557 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 19:47:27.364941    1557 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 19:47:27.369630    1557 out.go:97] Using the qemu2 driver based on user configuration
	I1003 19:47:27.369651    1557 start.go:297] selected driver: qemu2
	I1003 19:47:27.369669    1557 start.go:901] validating driver "qemu2" against <nil>
	I1003 19:47:27.369782    1557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 19:47:27.373677    1557 out.go:169] Automatically selected the socket_vmnet network
	I1003 19:47:27.379727    1557 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1003 19:47:27.379851    1557 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:47:27.379898    1557 cni.go:84] Creating CNI manager for ""
	I1003 19:47:27.379940    1557 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1003 19:47:27.379984    1557 start.go:340] cluster config:
	{Name:download-only-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:47:27.384764    1557 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:47:27.388737    1557 out.go:97] Downloading VM boot image ...
	I1003 19:47:27.388755    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1003 19:47:45.037323    1557 out.go:97] Starting "download-only-360000" primary control-plane node in "download-only-360000" cluster
	I1003 19:47:45.037348    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:47:45.297778    1557 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1003 19:47:45.297899    1557 cache.go:56] Caching tarball of preloaded images
	I1003 19:47:45.298785    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:47:45.302742    1557 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1003 19:47:45.302768    1557 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 19:47:45.860385    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1003 19:48:04.715150    1557 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 19:48:04.715305    1557 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1003 19:48:05.410015    1557 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1003 19:48:05.410245    1557 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/download-only-360000/config.json ...
	I1003 19:48:05.410262    1557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/download-only-360000/config.json: {Name:mk177ee186f2f53615699c35126b62254166afca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:48:05.410532    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1003 19:48:05.410775    1557 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1003 19:48:06.028044    1557 out.go:193] 
	W1003 19:48:06.032165    1557 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19546-1040/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0 0x104e756c0] Decompressors:map[bz2:0x1400000f790 gz:0x1400000f798 tar:0x1400000f740 tar.bz2:0x1400000f750 tar.gz:0x1400000f760 tar.xz:0x1400000f770 tar.zst:0x1400000f780 tbz2:0x1400000f750 tgz:0x1400000f760 txz:0x1400000f770 tzst:0x1400000f780 xz:0x1400000f7a0 zip:0x1400000f7b0 zst:0x1400000f7a8] Getters:map[file:0x14000464770 http:0x14000746460 https:0x14000746730] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1003 19:48:06.032193    1557 out_reason.go:110] 
	W1003 19:48:06.039034    1557 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:48:06.043060    1557 out.go:193] 
	
	
	* The control-plane node download-only-360000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-360000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-360000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (17.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-519000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-519000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (17.45706s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (17.46s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1003 19:48:23.892753    1556 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1003 19:48:23.892809    1556 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-519000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-519000: exit status 85 (75.775834ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-360000 | jenkins | v1.34.0 | 03 Oct 24 19:47 PDT |                     |
	|         | -p download-only-360000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 03 Oct 24 19:48 PDT | 03 Oct 24 19:48 PDT |
	| delete  | -p download-only-360000        | download-only-360000 | jenkins | v1.34.0 | 03 Oct 24 19:48 PDT | 03 Oct 24 19:48 PDT |
	| start   | -o=json --download-only        | download-only-519000 | jenkins | v1.34.0 | 03 Oct 24 19:48 PDT |                     |
	|         | -p download-only-519000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/03 19:48:06
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:48:06.463824    1581 out.go:345] Setting OutFile to fd 1 ...
	I1003 19:48:06.464223    1581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:48:06.464228    1581 out.go:358] Setting ErrFile to fd 2...
	I1003 19:48:06.464231    1581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 19:48:06.464441    1581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 19:48:06.465914    1581 out.go:352] Setting JSON to true
	I1003 19:48:06.483886    1581 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1057,"bootTime":1728009029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 19:48:06.483970    1581 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 19:48:06.488708    1581 out.go:97] [download-only-519000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 19:48:06.488810    1581 notify.go:220] Checking for updates...
	I1003 19:48:06.491722    1581 out.go:169] MINIKUBE_LOCATION=19546
	I1003 19:48:06.494733    1581 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 19:48:06.498744    1581 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 19:48:06.501805    1581 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:48:06.504683    1581 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	W1003 19:48:06.510683    1581 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 19:48:06.510879    1581 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 19:48:06.513647    1581 out.go:97] Using the qemu2 driver based on user configuration
	I1003 19:48:06.513657    1581 start.go:297] selected driver: qemu2
	I1003 19:48:06.513661    1581 start.go:901] validating driver "qemu2" against <nil>
	I1003 19:48:06.513719    1581 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1003 19:48:06.516686    1581 out.go:169] Automatically selected the socket_vmnet network
	I1003 19:48:06.522050    1581 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1003 19:48:06.522137    1581 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:48:06.522158    1581 cni.go:84] Creating CNI manager for ""
	I1003 19:48:06.522184    1581 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1003 19:48:06.522197    1581 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1003 19:48:06.522228    1581 start.go:340] cluster config:
	{Name:download-only-519000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:48:06.526613    1581 iso.go:125] acquiring lock: {Name:mk76a49c49067b99577513bbb70fbceab7931be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:48:06.529685    1581 out.go:97] Starting "download-only-519000" primary control-plane node in "download-only-519000" cluster
	I1003 19:48:06.529693    1581 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 19:48:07.138896    1581 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1003 19:48:07.138946    1581 cache.go:56] Caching tarball of preloaded images
	I1003 19:48:07.139767    1581 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1003 19:48:07.144755    1581 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1003 19:48:07.144782    1581 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1003 19:48:07.696625    1581 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19546-1040/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-519000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-519000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-519000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.34s)

                                                
                                                
=== RUN   TestBinaryMirror
I1003 19:48:24.406270    1556 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-731000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-731000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-731000
--- PASS: TestBinaryMirror (0.34s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-814000
addons_test.go:945: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-814000: exit status 85 (54.555625ms)

                                                
                                                
-- stdout --
	* Profile "addons-814000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-814000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:956: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-814000
addons_test.go:956: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-814000: exit status 85 (58.589042ms)

                                                
                                                
-- stdout --
	* Profile "addons-814000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-814000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (253.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-814000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-814000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=logviewer --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m13.685378084s)
--- PASS: TestAddons/Setup (253.69s)

                                                
                                    
TestAddons/serial/Volcano (38.85s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:814: volcano-scheduler stabilized in 6.48325ms
addons_test.go:830: volcano-controller stabilized in 6.738958ms
addons_test.go:822: volcano-admission stabilized in 6.852875ms
addons_test.go:836: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-c5lr5" [896fdadc-39ee-4ae3-b499-c5fa81a133f5] Running
addons_test.go:836: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.006291333s
addons_test.go:840: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-mtvt6" [13185c9a-ca7d-432a-9164-f9cb8a5e2ad3] Running
addons_test.go:840: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004404041s
addons_test.go:844: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-qxrmw" [b5b62cb3-8ff3-4470-90ca-4d9b12560755] Running
addons_test.go:844: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.009337125s
addons_test.go:849: (dbg) Run:  kubectl --context addons-814000 delete -n volcano-system job volcano-admission-init
addons_test.go:855: (dbg) Run:  kubectl --context addons-814000 create -f testdata/vcjob.yaml
addons_test.go:863: (dbg) Run:  kubectl --context addons-814000 get vcjob -n my-volcano
addons_test.go:881: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [63e46027-5a15-465c-981b-d4825af76462] Pending
helpers_test.go:344: "test-job-nginx-0" [63e46027-5a15-465c-981b-d4825af76462] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [63e46027-5a15-465c-981b-d4825af76462] Running
addons_test.go:881: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.010727625s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable volcano --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-arm64 -p addons-814000 addons disable volcano --alsologtostderr -v=1: (10.552103833s)
--- PASS: TestAddons/serial/Volcano (38.85s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.08s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:570: (dbg) Run:  kubectl --context addons-814000 create ns new-namespace
addons_test.go:584: (dbg) Run:  kubectl --context addons-814000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

                                                
                                    
TestAddons/parallel/Registry (19.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:322: registry stabilized in 1.514875ms
addons_test.go:324: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-z4m5l" [296d0802-fccf-4235-80b2-16273fe61f5f] Running
I1003 20:01:26.969236    1556 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1003 20:01:26.969246    1556 kapi.go:107] duration metric: took 34.72975ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:324: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00548s
addons_test.go:327: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jcdkj" [f6a7fa7f-3a03-4087-9893-b5016f32e5a6] Running
addons_test.go:327: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006204666s
addons_test.go:332: (dbg) Run:  kubectl --context addons-814000 delete po -l run=registry-test --now
addons_test.go:337: (dbg) Run:  kubectl --context addons-814000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:337: (dbg) Done: kubectl --context addons-814000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.202222208s)
addons_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 ip
2024/10/03 20:01:46 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.53s)

                                                
                                    
TestAddons/parallel/Ingress (17.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:208: (dbg) Run:  kubectl --context addons-814000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:233: (dbg) Run:  kubectl --context addons-814000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:246: (dbg) Run:  kubectl --context addons-814000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [de0f099b-4f30-44fd-b58e-39502f9ac67e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [de0f099b-4f30-44fd-b58e-39502f9ac67e] Running
addons_test.go:251: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009764083s
I1003 20:02:59.433742    1556 kapi.go:150] Service nginx in namespace default found.
addons_test.go:263: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:287: (dbg) Run:  kubectl --context addons-814000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:292: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 ip
addons_test.go:298: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable ingress --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-arm64 -p addons-814000 addons disable ingress --alsologtostderr -v=1: (7.358462958s)
--- PASS: TestAddons/parallel/Ingress (17.74s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5knfz" [909c1133-9ed3-4ff5-bc19-e7fbdb3e5977] Running
addons_test.go:759: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002078209s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-arm64 -p addons-814000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.239982125s)
--- PASS: TestAddons/parallel/InspektorGadget (10.24s)

                                                
                                    
TestAddons/parallel/Logviewer (6.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Logviewer
=== PAUSE TestAddons/parallel/Logviewer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Logviewer
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: waiting 8m0s for pods matching "app=logviewer" in namespace "kube-system" ...
helpers_test.go:344: "logviewer-7c79c8bcc9-wf225" [d768d7a3-a275-47dc-9fab-a5e845f11c90] Running
addons_test.go:769: (dbg) TestAddons/parallel/Logviewer: app=logviewer healthy within 6.011629917s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable logviewer --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Logviewer (6.21s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.29s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:395: metrics-server stabilized in 1.382167ms
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fdzcg" [a62db7cb-8d54-4094-b268-20d212a3fd51] Running
addons_test.go:397: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010589583s
addons_test.go:403: (dbg) Run:  kubectl --context addons-814000 top pods -n kube-system
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

                                                
                                    
TestAddons/parallel/CSI (53.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1003 20:01:26.934543    1556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:489: csi-hostpath-driver pods stabilized in 34.733333ms
addons_test.go:492: (dbg) Run:  kubectl --context addons-814000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:497: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:502: (dbg) Run:  kubectl --context addons-814000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:507: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5943a69c-ba4b-4528-890e-1e00cbed859b] Pending
helpers_test.go:344: "task-pv-pod" [5943a69c-ba4b-4528-890e-1e00cbed859b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5943a69c-ba4b-4528-890e-1e00cbed859b] Running
addons_test.go:507: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.005991416s
addons_test.go:512: (dbg) Run:  kubectl --context addons-814000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:517: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-814000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-814000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:522: (dbg) Run:  kubectl --context addons-814000 delete pod task-pv-pod
addons_test.go:522: (dbg) Done: kubectl --context addons-814000 delete pod task-pv-pod: (1.038884s)
addons_test.go:528: (dbg) Run:  kubectl --context addons-814000 delete pvc hpvc
addons_test.go:534: (dbg) Run:  kubectl --context addons-814000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-814000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:549: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [87a766d9-57f1-4bae-9f36-5471be491768] Pending
helpers_test.go:344: "task-pv-pod-restore" [87a766d9-57f1-4bae-9f36-5471be491768] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [87a766d9-57f1-4bae-9f36-5471be491768] Running
addons_test.go:549: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005165375s
addons_test.go:554: (dbg) Run:  kubectl --context addons-814000 delete pod task-pv-pod-restore
addons_test.go:554: (dbg) Done: kubectl --context addons-814000 delete pod task-pv-pod-restore: (1.004403708s)
addons_test.go:558: (dbg) Run:  kubectl --context addons-814000 delete pvc hpvc-restore
addons_test.go:562: (dbg) Run:  kubectl --context addons-814000 delete volumesnapshot new-snapshot-demo
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-arm64 -p addons-814000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.104788375s)
--- PASS: TestAddons/parallel/CSI (53.51s)

                                                
                                    
TestAddons/parallel/Headlamp (16.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:744: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-814000 --alsologtostderr -v=1
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-wfxp5" [464912fa-3265-4c32-948f-60c376309e85] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-wfxp5" [464912fa-3265-4c32-948f-60c376309e85] Running
addons_test.go:749: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009813458s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-arm64 -p addons-814000 addons disable headlamp --alsologtostderr -v=1: (5.283118791s)
--- PASS: TestAddons/parallel/Headlamp (16.70s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-5sxsx" [b48391a9-048b-450c-85e9-c26863a376d3] Running
addons_test.go:786: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009554958s
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

                                                
                                    
TestAddons/parallel/LocalPath (40.96s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:894: (dbg) Run:  kubectl --context addons-814000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:900: (dbg) Run:  kubectl --context addons-814000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:904: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [05b27162-fdc8-45f1-8e54-7cc396155354] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [05b27162-fdc8-45f1-8e54-7cc396155354] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [05b27162-fdc8-45f1-8e54-7cc396155354] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:907: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004508958s
addons_test.go:912: (dbg) Run:  kubectl --context addons-814000 get pvc test-pvc -o=json
addons_test.go:921: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 ssh "cat /opt/local-path-provisioner/pvc-dc543ab6-4f86-4003-a253-96686ecce5bf_default_test-pvc/file1"
addons_test.go:933: (dbg) Run:  kubectl --context addons-814000 delete pod test-local-path
addons_test.go:937: (dbg) Run:  kubectl --context addons-814000 delete pvc test-pvc
addons_test.go:990: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:990: (dbg) Done: out/minikube-darwin-arm64 -p addons-814000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.450983167s)
--- PASS: TestAddons/parallel/LocalPath (40.96s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.16s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4lrk8" [2fe65941-f41c-4670-ace3-4af119efc50d] Running
addons_test.go:969: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0045695s
addons_test.go:972: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-814000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.16s)

                                                
                                    
TestAddons/parallel/Yakd (11.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jvrzl" [8a39f2e0-2dfc-4c63-aeb4-1ad657a468da] Running
addons_test.go:980: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00533325s
addons_test.go:984: (dbg) Run:  out/minikube-darwin-arm64 -p addons-814000 addons disable yakd --alsologtostderr -v=1
addons_test.go:984: (dbg) Done: out/minikube-darwin-arm64 -p addons-814000 addons disable yakd --alsologtostderr -v=1: (5.235486916s)
--- PASS: TestAddons/parallel/Yakd (11.24s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-814000
addons_test.go:171: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-814000: (12.197329042s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-814000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-814000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-814000
--- PASS: TestAddons/StoppedEnableDisable (12.35s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.99s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
I1003 20:40:11.878769    1556 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1003 20:40:11.878969    1556 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1003 20:40:13.903587    1556 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1003 20:40:13.903851    1556 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1003 20:40:13.903894    1556 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit
I1003 20:40:14.417990    1556 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40 0x1040f6d40] Decompressors:map[bz2:0x14000687a50 gz:0x14000687a58 tar:0x14000687a00 tar.bz2:0x14000687a10 tar.gz:0x14000687a20 tar.xz:0x14000687a30 tar.zst:0x14000687a40 tbz2:0x14000687a10 tgz:0x14000687a20 txz:0x14000687a30 tzst:0x14000687a40 xz:0x14000687a60 zip:0x14000687a70 zst:0x14000687a68] Getters:map[file:0x140000368d0 http:0x14000665b80 https:0x14000665c70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1003 20:40:14.418127    1556 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2739688894/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.99s)

                                                
                                    
TestErrorSpam/setup (34.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-648000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-648000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 --driver=qemu2 : (34.373738791s)
--- PASS: TestErrorSpam/setup (34.37s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.25s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 status
--- PASS: TestErrorSpam/status (0.25s)

                                                
                                    
TestErrorSpam/pause (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 pause
--- PASS: TestErrorSpam/pause (0.70s)

                                                
                                    
TestErrorSpam/unpause (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

                                                
                                    
TestErrorSpam/stop (64.31s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 stop: (12.217262791s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 stop: (26.061245708s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-648000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-648000 stop: (26.03374925s)
--- PASS: TestErrorSpam/stop (64.31s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19546-1040/.minikube/files/etc/test/nested/copy/1556/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.82s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-063000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-063000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.820558542s)
--- PASS: TestFunctional/serial/StartWithProxy (49.82s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.65s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1003 20:05:50.949165    1556 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-063000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-063000 --alsologtostderr -v=8: (36.650425958s)
functional_test.go:663: soft start took 36.650909458s for "functional-063000" cluster.
I1003 20:06:27.599550    1556 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.65s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-063000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (9.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-063000 cache add registry.k8s.io/pause:3.1: (3.597790792s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-063000 cache add registry.k8s.io/pause:3.3: (3.444429375s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-063000 cache add registry.k8s.io/pause:latest: (2.376733708s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3742539741/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cache add minikube-local-cache-test:functional-063000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-063000 cache add minikube-local-cache-test:functional-063000: (1.3293895s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cache delete minikube-local-cache-test:functional-063000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-063000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (69.411792ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-arm64 -p functional-063000 cache reload: (1.951899542s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.82s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 kubectl -- --context functional-063000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.82s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-063000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-063000 get pods: (1.018255s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (62.77s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-063000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1003 20:07:38.501240    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:38.508945    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:38.522319    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:38.544220    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:38.587643    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:38.671111    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:38.834663    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:39.158270    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:39.802308    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:41.086135    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:07:43.649923    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-063000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.769559625s)
functional_test.go:761: restart took 1m2.769637666s for "functional-063000" cluster.
I1003 20:07:45.748891    1556 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (62.77s)

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-063000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.69s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.69s)

TestFunctional/serial/LogsFileCmd (0.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2062924905/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (4.1s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-063000 apply -f testdata/invalidsvc.yaml
E1003 20:07:48.774000    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-063000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-063000: exit status 115 (124.589292ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30875 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-063000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.24s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 config get cpus: exit status 14 (34.595958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 config get cpus: exit status 14 (31.354333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DashboardCmd (8.07s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-063000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-063000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2618: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.07s)

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-063000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-063000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.624958ms)

-- stdout --
	* [functional-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1003 20:08:36.589377    2591 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:08:36.589536    2591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:08:36.589539    2591 out.go:358] Setting ErrFile to fd 2...
	I1003 20:08:36.589541    2591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:08:36.589679    2591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:08:36.590791    2591 out.go:352] Setting JSON to false
	I1003 20:08:36.608948    2591 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2287,"bootTime":1728009029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:08:36.609018    2591 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:08:36.613509    2591 out.go:177] * [functional-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1003 20:08:36.621538    2591 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:08:36.621620    2591 notify.go:220] Checking for updates...
	I1003 20:08:36.627453    2591 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:08:36.630483    2591 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:08:36.631876    2591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:08:36.634466    2591 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:08:36.637478    2591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:08:36.640838    2591 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:08:36.641091    2591 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:08:36.645496    2591 out.go:177] * Using the qemu2 driver based on existing profile
	I1003 20:08:36.652482    2591 start.go:297] selected driver: qemu2
	I1003 20:08:36.652488    2591 start.go:901] validating driver "qemu2" against &{Name:functional-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:functional-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:08:36.652556    2591 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:08:36.659465    2591 out.go:201] 
	W1003 20:08:36.663502    2591 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 20:08:36.667418    2591 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-063000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-063000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-063000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.707084ms)

-- stdout --
	* [functional-063000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1003 20:08:36.471979    2587 out.go:345] Setting OutFile to fd 1 ...
	I1003 20:08:36.472112    2587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:08:36.472116    2587 out.go:358] Setting ErrFile to fd 2...
	I1003 20:08:36.472118    2587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1003 20:08:36.472240    2587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
	I1003 20:08:36.473770    2587 out.go:352] Setting JSON to false
	I1003 20:08:36.493506    2587 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2287,"bootTime":1728009029,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1003 20:08:36.493596    2587 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1003 20:08:36.498508    2587 out.go:177] * [functional-063000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1003 20:08:36.506314    2587 out.go:177]   - MINIKUBE_LOCATION=19546
	I1003 20:08:36.506342    2587 notify.go:220] Checking for updates...
	I1003 20:08:36.513515    2587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	I1003 20:08:36.516455    2587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1003 20:08:36.519469    2587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 20:08:36.522457    2587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	I1003 20:08:36.523712    2587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 20:08:36.526852    2587 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1003 20:08:36.527099    2587 driver.go:394] Setting default libvirt URI to qemu:///system
	I1003 20:08:36.531452    2587 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1003 20:08:36.536445    2587 start.go:297] selected driver: qemu2
	I1003 20:08:36.536450    2587 start.go:901] validating driver "qemu2" against &{Name:functional-063000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:functional-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 20:08:36.536491    2587 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 20:08:36.543497    2587 out.go:201] 
	W1003 20:08:36.547454    2587 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1003 20:08:36.551486    2587 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.25s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.94s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b5c72771-c572-46a2-b3cc-f40a4c63d36b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012504417s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-063000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-063000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-063000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-063000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6d7289a1-e01c-4bcf-833b-dbd272b5ba5e] Pending
helpers_test.go:344: "sp-pod" [6d7289a1-e01c-4bcf-833b-dbd272b5ba5e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1003 20:07:59.017407    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [6d7289a1-e01c-4bcf-833b-dbd272b5ba5e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004160084s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-063000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-063000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-063000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [58858be6-e415-44ce-ab8b-85af6563f0b0] Pending
helpers_test.go:344: "sp-pod" [58858be6-e415-44ce-ab8b-85af6563f0b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [58858be6-e415-44ce-ab8b-85af6563f0b0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.009692625s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-063000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.94s)

TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.43s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh -n functional-063000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cp functional-063000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd939100816/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh -n functional-063000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh -n functional-063000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1556/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo cat /etc/test/nested/copy/1556/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.43s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1556.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo cat /etc/ssl/certs/1556.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1556.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo cat /usr/share/ca-certificates/1556.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo cat /etc/ssl/certs/15562.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo cat /usr/share/ca-certificates/15562.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.43s)

TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-063000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh "sudo systemctl is-active crio": exit status 1 (68.782083ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (1.4s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-arm64 license: (1.399913167s)
--- PASS: TestFunctional/parallel/License (1.40s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-063000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-063000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-063000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-063000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2461: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-063000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-063000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [179ca1cb-6a44-4d32-b768-96e6a371c17c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [179ca1cb-6a44-4d32-b768-96e6a371c17c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003158417s
I1003 20:08:01.946090    1556 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-063000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.87.140 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1003 20:08:02.018522    1556 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1003 20:08:02.118549    1556 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-063000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-063000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-063000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-wcbhb" [9a898e13-3f42-4a57-b032-2cf98100309a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-wcbhb" [9a898e13-3f42-4a57-b032-2cf98100309a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.023163s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.11s)

TestFunctional/parallel/ServiceCmd/List (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 service list -o json
functional_test.go:1494: Took "291.309583ms" to run "out/minikube-darwin-arm64 -p functional-063000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30568
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30568
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "97.520958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "35.843042ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "96.042959ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "35.824ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)
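Both profile list runs above emit JSON suitable for scripting. A hedged sketch of pulling profile names out of it, assuming the usual valid/invalid top-level arrays and a jq binary on the host:

out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'
out/minikube-darwin-arm64 profile list -o json --light | jq -r '.valid[].Name'   # --light skips cluster status checks, hence the shorter runtime logged above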

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1348374540/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728011305020592000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1348374540/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728011305020592000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1348374540/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728011305020592000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1348374540/001/test-1728011305020592000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.596209ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 20:08:25.081790    1556 retry.go:31] will retry after 254.222583ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.749667ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 20:08:25.399025    1556 retry.go:31] will retry after 1.019124922s: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 03:08 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 03:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 03:08 test-1728011305020592000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh cat /mount-9p/test-1728011305020592000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-063000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f] Pending
helpers_test.go:344: "busybox-mount" [69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [69ea2a9f-e4c0-4cbc-89b5-5d1f8f368b1f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003386s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-063000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1348374540/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.05s)
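For reference, the 9p mount flow exercised above can be replayed by hand; a rough sketch, with /tmp/demo-mount standing in for the per-test temp directory and the mount helper run in the background:

mkdir -p /tmp/demo-mount && date > /tmp/demo-mount/created-by-hand
out/minikube-darwin-arm64 mount -p functional-063000 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry or two while the mount settles, as seen above
out/minikube-darwin-arm64 -p functional-063000 ssh -- ls -la /mount-9p
out/minikube-darwin-arm64 -p functional-063000 ssh "sudo umount -f /mount-9p"
kill %1   # stop the backgrounded mount helper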

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.03s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3725839752/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3725839752/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh "sudo umount -f /mount-9p": exit status 1 (69.940833ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-063000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3725839752/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.03s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3059654325/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3059654325/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3059654325/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount1: exit status 1 (93.56775ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 20:08:37.196833    1556 retry.go:31] will retry after 434.055846ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount3: exit status 1 (58.458542ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 20:08:37.855744    1556 retry.go:31] will retry after 850.358645ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-063000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3059654325/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3059654325/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-063000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3059654325/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)
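The cleanup path above relies on the --kill flag rather than unmounting each share individually. A short sketch of the same teardown, assuming three mounts like the ones started in this test:

out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount1          # succeeds while the mounts are up
out/minikube-darwin-arm64 mount -p functional-063000 --kill=true                 # asks minikube to stop its mount processes for the profile
out/minikube-darwin-arm64 -p functional-063000 ssh "findmnt -T" /mount1 || echo "mounts gone"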

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-063000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-063000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-063000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-063000 image ls --format short --alsologtostderr:
I1003 20:08:45.374355    2754 out.go:345] Setting OutFile to fd 1 ...
I1003 20:08:45.374563    2754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.374569    2754 out.go:358] Setting ErrFile to fd 2...
I1003 20:08:45.374571    2754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.374737    2754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
I1003 20:08:45.375215    2754 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.375280    2754 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.376167    2754 ssh_runner.go:195] Run: systemctl --version
I1003 20:08:45.376176    2754 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
I1003 20:08:45.408396    2754 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-063000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 048e090385966 | 197MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | alpine            | 577a23b5858b9 | 50.8MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/kicbase/echo-server               | functional-063000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-063000 | e93146aeb4659 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-063000 image ls --format table --alsologtostderr:
I1003 20:08:45.666685    2766 out.go:345] Setting OutFile to fd 1 ...
I1003 20:08:45.666922    2766 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.666925    2766 out.go:358] Setting ErrFile to fd 2...
I1003 20:08:45.666928    2766 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.667069    2766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
I1003 20:08:45.667544    2766 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.667613    2766 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.668633    2766 ssh_runner.go:195] Run: systemctl --version
I1003 20:08:45.668643    2766 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
I1003 20:08:45.695092    2766 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-063000 image ls --format json --alsologtostderr:
[{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-063000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530f
dd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"50800000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.
io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e93146aeb4659a2c8f1ef19e27f326bfda7fe0f1a8e91fad489ed6553e7c98e2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-063000
"],"size":"30"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-063000 image ls --format json --alsologtostderr:
I1003 20:08:45.592625    2763 out.go:345] Setting OutFile to fd 1 ...
I1003 20:08:45.592818    2763 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.592822    2763 out.go:358] Setting ErrFile to fd 2...
I1003 20:08:45.592825    2763 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.592969    2763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
I1003 20:08:45.593395    2763 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.593453    2763 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.594341    2763 ssh_runner.go:195] Run: systemctl --version
I1003 20:08:45.594350    2763 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
I1003 20:08:45.618445    2763 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-063000 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "50800000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-063000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: e93146aeb4659a2c8f1ef19e27f326bfda7fe0f1a8e91fad489ed6553e7c98e2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-063000
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-063000 image ls --format yaml --alsologtostderr:
I1003 20:08:45.519813    2759 out.go:345] Setting OutFile to fd 1 ...
I1003 20:08:45.519991    2759 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.519994    2759 out.go:358] Setting ErrFile to fd 2...
I1003 20:08:45.519996    2759 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.520132    2759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
I1003 20:08:45.520547    2759 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.520609    2759 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.521373    2759 ssh_runner.go:195] Run: systemctl --version
I1003 20:08:45.521387    2759 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
I1003 20:08:45.546792    2759 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-063000 ssh pgrep buildkitd: exit status 1 (64.862292ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image build -t localhost/my-image:functional-063000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-063000 image build -t localhost/my-image:functional-063000 testdata/build --alsologtostderr: (4.592448833s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-063000 image build -t localhost/my-image:functional-063000 testdata/build --alsologtostderr:
I1003 20:08:45.581462    2762 out.go:345] Setting OutFile to fd 1 ...
I1003 20:08:45.581800    2762 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.581804    2762 out.go:358] Setting ErrFile to fd 2...
I1003 20:08:45.581806    2762 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1003 20:08:45.581964    2762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19546-1040/.minikube/bin
I1003 20:08:45.582465    2762 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.583261    2762 config.go:182] Loaded profile config "functional-063000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1003 20:08:45.584259    2762 ssh_runner.go:195] Run: systemctl --version
I1003 20:08:45.584271    2762 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19546-1040/.minikube/machines/functional-063000/id_rsa Username:docker}
I1003 20:08:45.610453    2762 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1916232113.tar
I1003 20:08:45.610524    2762 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 20:08:45.613930    2762 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1916232113.tar
I1003 20:08:45.615434    2762 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1916232113.tar: stat -c "%s %y" /var/lib/minikube/build/build.1916232113.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1916232113.tar': No such file or directory
I1003 20:08:45.615452    2762 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1916232113.tar --> /var/lib/minikube/build/build.1916232113.tar (3072 bytes)
I1003 20:08:45.625044    2762 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1916232113
I1003 20:08:45.629690    2762 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1916232113 -xf /var/lib/minikube/build/build.1916232113.tar
I1003 20:08:45.633430    2762 docker.go:360] Building image: /var/lib/minikube/build/build.1916232113
I1003 20:08:45.633490    2762 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-063000 /var/lib/minikube/build/build.1916232113
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 1.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 1.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f701707f5d7efed5b0ec30df0a42f5006a19e468086dddaae540301b23af6f56 done
#8 naming to localhost/my-image:functional-063000 done
#8 DONE 0.0s
I1003 20:08:50.125133    2762 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-063000 /var/lib/minikube/build/build.1916232113: (4.491654875s)
I1003 20:08:50.125397    2762 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1916232113
I1003 20:08:50.129243    2762 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1916232113.tar
I1003 20:08:50.134028    2762 build_images.go:217] Built localhost/my-image:functional-063000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1916232113.tar
I1003 20:08:50.134045    2762 build_images.go:133] succeeded building to: functional-063000
I1003 20:08:50.134049    2762 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.73s)
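The buildkit trace above implies that testdata/build holds a three-step Dockerfile plus a small content.txt. A hedged reconstruction of an equivalent build context (content.txt's real contents are not shown in the log, so a placeholder is used):

mkdir -p /tmp/demo-build && cd /tmp/demo-build
printf 'placeholder\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-darwin-arm64 -p functional-063000 image build -t localhost/my-image:functional-063000 . --alsologtostderr
out/minikube-darwin-arm64 -p functional-063000 image ls | grep my-image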

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.6300945s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-063000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image load --daemon kicbase/echo-server:functional-063000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image load --daemon kicbase/echo-server:functional-063000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-063000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image load --daemon kicbase/echo-server:functional-063000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image save kicbase/echo-server:functional-063000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image rm kicbase/echo-server:functional-063000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
2024/10/03 20:08:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-063000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 image save --daemon kicbase/echo-server:functional-063000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-063000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
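Taken together, the image subtests above form a save / remove / reload round trip. Stitched into one sequence, with an arbitrary tar path in place of the Jenkins workspace file:

out/minikube-darwin-arm64 -p functional-063000 image save kicbase/echo-server:functional-063000 /tmp/echo-server.tar --alsologtostderr
out/minikube-darwin-arm64 -p functional-063000 image rm kicbase/echo-server:functional-063000 --alsologtostderr
out/minikube-darwin-arm64 -p functional-063000 image load /tmp/echo-server.tar --alsologtostderr
out/minikube-darwin-arm64 -p functional-063000 image ls | grep echo-server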

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-063000 docker-env) && out/minikube-darwin-arm64 status -p functional-063000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-063000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)
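The docker-env check above is the same pattern used interactively to point a host docker client at the daemon inside the functional-063000 VM; a minimal sketch:

eval "$(out/minikube-darwin-arm64 -p functional-063000 docker-env)"
docker images      # now lists the images inside the VM, matching the image ls output above
docker ps          # likewise talks to the VM's daemon; open a new shell to return to the host daemon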

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-063000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-063000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-063000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-063000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (230.44s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-006000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1003 20:09:00.463400    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:10:22.386552    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:38.498608    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-006000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m50.260458083s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (230.44s)
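After an HA start like the one above, the resulting topology can be inspected with kubectl directly; a small sketch, assuming the kubectl context is named after the profile (as the later NodeLabels test uses):

kubectl --context ha-006000 get nodes -o wide
kubectl --context ha-006000 get pods -n kube-system -l component=kube-apiserver   # one per control-plane node under kubeadm's default labels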

                                                
                                    
TestMultiControlPlane/serial/DeployApp (10.02s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-006000 -- rollout status deployment/busybox: (8.087539375s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-cbhnm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-gdc64 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-wcpdx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-cbhnm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-gdc64 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-wcpdx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-cbhnm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-gdc64 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-wcpdx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.02s)
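The DNS probes above run nslookup inside each busybox replica. A single-pod version that can be rerun by hand, where the app=busybox selector is an assumption based on the deployment name rather than something shown in the log:

POD=$(out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -l app=busybox -o "jsonpath={.items[0].metadata.name}")
out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local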

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.74s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-cbhnm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-cbhnm -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-gdc64 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-gdc64 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-wcpdx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec busybox-7dff88458-wcpdx -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (87.19s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-006000 -v=7 --alsologtostderr
E1003 20:12:51.651665    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:51.658287    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:51.669735    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:51.692495    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:51.734341    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:51.815848    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:51.979278    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:52.302503    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:52.945927    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:54.227646    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:12:56.791109    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:13:01.914171    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:13:06.228847    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/addons-814000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:13:12.157549    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:13:32.638903    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
E1003 20:14:13.602026    1556 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19546-1040/.minikube/profiles/functional-063000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-006000 -v=7 --alsologtostderr: (1m26.97076075s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (87.19s)

TestMultiControlPlane/serial/NodeLabels (0.15s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-006000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.3s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.30s)

TestMultiControlPlane/serial/CopyFile (4.16s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp testdata/cp-test.txt ha-006000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2627363893/001/cp-test_ha-006000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000:/home/docker/cp-test.txt ha-006000-m02:/home/docker/cp-test_ha-006000_ha-006000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test_ha-006000_ha-006000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000:/home/docker/cp-test.txt ha-006000-m03:/home/docker/cp-test_ha-006000_ha-006000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test_ha-006000_ha-006000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000:/home/docker/cp-test.txt ha-006000-m04:/home/docker/cp-test_ha-006000_ha-006000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test_ha-006000_ha-006000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp testdata/cp-test.txt ha-006000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2627363893/001/cp-test_ha-006000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m02:/home/docker/cp-test.txt ha-006000:/home/docker/cp-test_ha-006000-m02_ha-006000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test_ha-006000-m02_ha-006000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m02:/home/docker/cp-test.txt ha-006000-m03:/home/docker/cp-test_ha-006000-m02_ha-006000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test_ha-006000-m02_ha-006000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m02:/home/docker/cp-test.txt ha-006000-m04:/home/docker/cp-test_ha-006000-m02_ha-006000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test_ha-006000-m02_ha-006000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp testdata/cp-test.txt ha-006000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2627363893/001/cp-test_ha-006000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m03:/home/docker/cp-test.txt ha-006000:/home/docker/cp-test_ha-006000-m03_ha-006000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test_ha-006000-m03_ha-006000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m03:/home/docker/cp-test.txt ha-006000-m02:/home/docker/cp-test_ha-006000-m03_ha-006000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test_ha-006000-m03_ha-006000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m03:/home/docker/cp-test.txt ha-006000-m04:/home/docker/cp-test_ha-006000-m03_ha-006000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test_ha-006000-m03_ha-006000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp testdata/cp-test.txt ha-006000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2627363893/001/cp-test_ha-006000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m04:/home/docker/cp-test.txt ha-006000:/home/docker/cp-test_ha-006000-m04_ha-006000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000 "sudo cat /home/docker/cp-test_ha-006000-m04_ha-006000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m04:/home/docker/cp-test.txt ha-006000-m02:/home/docker/cp-test_ha-006000-m04_ha-006000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m02 "sudo cat /home/docker/cp-test_ha-006000-m04_ha-006000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 cp ha-006000-m04:/home/docker/cp-test.txt ha-006000-m03:/home/docker/cp-test_ha-006000-m04_ha-006000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 ssh -n ha-006000-m03 "sudo cat /home/docker/cp-test_ha-006000-m04_ha-006000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.16s)
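Every CopyFile step above is the same round trip: push a file to a node with minikube cp, read it back over minikube ssh, and compare contents. A minimal Go sketch of that loop, assuming the binary path, profile and node names from this run; roundTrip is an illustrative helper, not the actual helpers_test.go code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// roundTrip copies src onto the given node with "minikube cp", reads it back
// over "minikube ssh", and reports a mismatch. Paths and names mirror this
// run; the helper itself is hypothetical.
func roundTrip(bin, profile, node, src string) error {
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	dst := node + ":/home/docker/cp-test.txt"
	if out, err := exec.Command(bin, "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp to %s failed: %v: %s", node, err, out)
	}
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		return fmt.Errorf("ssh cat on %s failed: %v", node, err)
	}
	if strings.TrimSpace(string(got)) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("content mismatch on %s", node)
	}
	return nil
}

func main() {
	bin, profile := "out/minikube-darwin-arm64", "ha-006000"
	for _, node := range []string{"ha-006000", "ha-006000-m02", "ha-006000-m03", "ha-006000-m04"} {
		if err := roundTrip(bin, profile, node, "testdata/cp-test.txt"); err != nil {
			fmt.Println(err)
		}
	}
}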

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-297000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-297000 --output=json --user=testUser: (3.724150459s)
--- PASS: TestJSONOutput/stop/Command (3.72s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-381000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-381000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.774958ms)

-- stdout --
	{"specversion":"1.0","id":"4088ca01-d7b3-406e-bb9c-47b88a133bf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-381000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"20424312-e9a4-4efd-9d3b-2eda14c9e292","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19546"}}
	{"specversion":"1.0","id":"86eb738b-7583-436d-b8fa-ae9ba3395015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig"}}
	{"specversion":"1.0","id":"21a7baf9-d3f9-4594-aefb-d96b0d2e4d64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"72ca3b60-846a-4f48-a5e2-3ff91551ca1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c869ad7-54f2-48a7-9196-83a0a4259181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube"}}
	{"specversion":"1.0","id":"b0629c9c-3d31-440f-a54e-5d952187b205","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6aacc941-3f7e-4d92-a09d-0ddc25d0f785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-381000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-381000
--- PASS: TestErrorJSONOutput (0.21s)
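Each line in the stdout block above is a CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the exit code and message that the test asserts on. A minimal Go sketch that scans such a stream and prints the error events; the struct covers only the fields visible in this report, not any published schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Feed the JSON lines on stdin, one event per line, e.g. piped from a
	// "minikube start ... --output=json" run like the one shown above.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}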

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (4.79s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.79s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-752000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.735458ms)

-- stdout --
	* [NoKubernetes-752000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19546
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19546-1040/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19546-1040/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-752000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-752000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.54975ms)

-- stdout --
	* The control-plane node NoKubernetes-752000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-752000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.684976833s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.733875125s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.42s)

TestNoKubernetes/serial/Stop (3.7s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-752000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-752000: (3.701201375s)
--- PASS: TestNoKubernetes/serial/Stop (3.70s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-752000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-752000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.464375ms)

-- stdout --
	* The control-plane node NoKubernetes-752000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-752000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-455000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestStartStop/group/old-k8s-version/serial/Stop (3.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-789000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-789000 --alsologtostderr -v=3: (3.219739s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-789000 -n old-k8s-version-789000: exit status 7 (36.591458ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-789000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
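minikube status exits non-zero when the host is down, which is why the harness logs "status error: exit status 7 (may be ok)" above and then continues. A minimal Go sketch of the same tolerance, accepting exit code 7 and still reading the {{.Host}} value; treating 7 as the stopped-host code is inferred from this run rather than from a documented contract.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs "minikube status --format={{.Host}}" and tolerates exit
// code 7, which this report shows is returned while the host is stopped.
func hostState(bin, profile string) (string, error) {
	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if err != nil && (!errors.As(err, &ee) || ee.ExitCode() != 7) {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil // e.g. "Stopped"
}

func main() {
	state, err := hostState("out/minikube-darwin-arm64", "old-k8s-version-789000")
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", state)
}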

TestStartStop/group/no-preload/serial/Stop (4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-431000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-431000 --alsologtostderr -v=3: (4.000752625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (4.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-431000 -n no-preload-431000: exit status 7 (60.035083ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-431000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.55s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-291000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-291000 --alsologtostderr -v=3: (3.546098208s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.55s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-291000 -n embed-certs-291000: exit status 7 (58.042625ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-291000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-329000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-329000 --alsologtostderr -v=3: (2.103780458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-329000 -n default-k8s-diff-port-329000: exit status 7 (65.104167ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-329000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-384000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-384000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-384000 --alsologtostderr -v=3: (2.017838166s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.02s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-384000 -n newest-cni-384000: exit status 7 (64.427416ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-384000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/275)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:423: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.36s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-783000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-783000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-783000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: ip a s:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: ip r s:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: iptables-save:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: iptables table nat:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-783000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-783000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-783000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-783000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-783000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-783000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-783000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-783000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: kubelet daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: kubelet logs:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-783000

>>> host: docker daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: docker daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: docker system info:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: cri-docker daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: cri-docker daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: cri-dockerd version:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: containerd daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: containerd daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: containerd config dump:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: crio daemon status:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: crio daemon config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: /etc/crio:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

>>> host: crio config:
* Profile "cilium-783000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-783000"

----------------------- debugLogs end: cilium-783000 [took: 2.248371s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-783000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-783000
--- SKIP: TestNetworkPlugins/group/cilium (2.36s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-272000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-272000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
