Test Report: QEMU_macOS 19985

22dd179cd6f75db6f60fbf5ee015cd1b680b4179:2024-12-04:37341

Failed tests (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.73
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.14
48 TestCertOptions 10.17
49 TestCertExpiration 195.18
50 TestDockerFlags 10.1
51 TestForceSystemdFlag 10.18
52 TestForceSystemdEnv 10.87
97 TestFunctional/parallel/ServiceCmdConnect 33.36
162 TestMultiControlPlane/serial/StartCluster 725.38
163 TestMultiControlPlane/serial/DeployApp 119.97
164 TestMultiControlPlane/serial/PingHostFromPods 0.1
165 TestMultiControlPlane/serial/AddWorkerNode 0.09
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
169 TestMultiControlPlane/serial/StopSecondaryNode 0.12
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
171 TestMultiControlPlane/serial/RestartSecondaryNode 0.16
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 953.86
184 TestJSONOutput/start/Command 725.26
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.09
196 TestJSONOutput/unpause/Command 0.06
216 TestMountStart/serial/StartWithMountFirst 10.04
219 TestMultiNode/serial/FreshStart2Nodes 9.89
220 TestMultiNode/serial/DeployApp2Nodes 87.39
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.08
223 TestMultiNode/serial/MultiNodeLabels 0.07
224 TestMultiNode/serial/ProfileList 0.09
225 TestMultiNode/serial/CopyFile 0.07
226 TestMultiNode/serial/StopNode 0.15
227 TestMultiNode/serial/StartAfterStop 39.21
228 TestMultiNode/serial/RestartKeepsNodes 8.68
229 TestMultiNode/serial/DeleteNode 0.11
230 TestMultiNode/serial/StopMultiNode 3.74
231 TestMultiNode/serial/RestartMultiNode 5.27
232 TestMultiNode/serial/ValidateNameConflict 20.08
236 TestPreload 10.07
238 TestScheduledStopUnix 10.01
239 TestSkaffold 12.71
242 TestRunningBinaryUpgrade 596.95
244 TestKubernetesUpgrade 18.36
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.09
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.02
260 TestStoppedBinaryUpgrade/Upgrade 574.96
262 TestPause/serial/Start 10.08
272 TestNoKubernetes/serial/StartWithK8s 10.01
273 TestNoKubernetes/serial/StartWithStopK8s 5.34
274 TestNoKubernetes/serial/Start 5.32
278 TestNoKubernetes/serial/StartNoArgs 5.32
280 TestNetworkPlugins/group/auto/Start 9.89
281 TestNetworkPlugins/group/calico/Start 9.82
282 TestNetworkPlugins/group/custom-flannel/Start 9.96
283 TestNetworkPlugins/group/false/Start 9.86
284 TestNetworkPlugins/group/kindnet/Start 9.95
285 TestNetworkPlugins/group/flannel/Start 9.94
286 TestNetworkPlugins/group/enable-default-cni/Start 9.98
287 TestNetworkPlugins/group/bridge/Start 9.89
288 TestNetworkPlugins/group/kubenet/Start 9.86
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.92
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.31
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
300 TestStartStop/group/old-k8s-version/serial/Pause 0.11
302 TestStartStop/group/no-preload/serial/FirstStart 10.02
303 TestStartStop/group/no-preload/serial/DeployApp 0.1
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/embed-certs/serial/FirstStart 10.03
309 TestStartStop/group/no-preload/serial/SecondStart 6.67
310 TestStartStop/group/embed-certs/serial/DeployApp 0.11
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.07
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
315 TestStartStop/group/no-preload/serial/Pause 0.12
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
320 TestStartStop/group/embed-certs/serial/SecondStart 7.48
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.11
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.07
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
326 TestStartStop/group/embed-certs/serial/Pause 0.12
329 TestStartStop/group/newest-cni/serial/FirstStart 10.06
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.6
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
340 TestStartStop/group/newest-cni/serial/SecondStart 5.27
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/newest-cni/serial/Pause 0.12

TestDownloadOnly/v1.20.0/json-events (25.73s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-612000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-612000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (25.73088875s)

-- stdout --
	{"specversion":"1.0","id":"08c273a9-2eb2-423f-b9e2-c58c6259db43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-612000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d6a37f4-5198-4e65-9b41-ba0b56e506cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19985"}}
	{"specversion":"1.0","id":"08cb0da1-abd2-4017-ae9d-38b2d15bec94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig"}}
	{"specversion":"1.0","id":"c37f2e59-b885-4df5-bb4b-13cb289f4b8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"69f4f8d0-7eb2-49ee-8d1a-3b8106a0f489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f6c5b69a-9af7-4ef9-bce1-ef3716db9a1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube"}}
	{"specversion":"1.0","id":"7c094f09-cc9b-400b-ac9e-7ced61f500cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"f516a600-9dfc-4160-b55e-05b74c3c1121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"153e2f85-d756-420a-b779-1a98e78dfeb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"65a49388-7131-4fc9-a006-4e484731b542","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"729caf5e-e56a-4c28-9502-644b3f403140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-612000\" primary control-plane node in \"download-only-612000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"08ade2c3-1d5f-4010-9d5c-57782aa4b957","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbf90ada-798e-4042-b6a7-ede604b41e1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320] Decompressors:map[bz2:0x14000693030 gz:0x14000693038 tar:0x14000692f80 tar.bz2:0x14000692f90 tar.gz:0x14000692fa0 tar.xz:0x14000692ff0 tar.zst:0x14000693010 tbz2:0x14000692f90 tgz:0x14
000692fa0 txz:0x14000692ff0 tzst:0x14000693010 xz:0x14000693060 zip:0x14000693070 zst:0x14000693068] Getters:map[file:0x140018c4840 http:0x1400091a0a0 https:0x1400091a0f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"65074d75-db16-4893-bda5-65d6f6c2f6d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1204 11:51:49.922492    1857 out.go:345] Setting OutFile to fd 1 ...
	I1204 11:51:49.922681    1857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 11:51:49.922685    1857 out.go:358] Setting ErrFile to fd 2...
	I1204 11:51:49.922687    1857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 11:51:49.922820    1857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	W1204 11:51:49.922895    1857 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19985-1334/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19985-1334/.minikube/config/config.json: no such file or directory
	I1204 11:51:49.924311    1857 out.go:352] Setting JSON to true
	I1204 11:51:49.943735    1857 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1280,"bootTime":1733340629,"procs":580,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 11:51:49.943816    1857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 11:51:49.949248    1857 out.go:97] [download-only-612000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 11:51:49.949396    1857 notify.go:220] Checking for updates...
	W1204 11:51:49.949464    1857 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 11:51:49.952177    1857 out.go:169] MINIKUBE_LOCATION=19985
	I1204 11:51:49.953796    1857 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 11:51:49.958238    1857 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 11:51:49.962248    1857 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 11:51:49.965237    1857 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	W1204 11:51:49.971233    1857 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 11:51:49.971493    1857 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 11:51:49.975123    1857 out.go:97] Using the qemu2 driver based on user configuration
	I1204 11:51:49.975147    1857 start.go:297] selected driver: qemu2
	I1204 11:51:49.975163    1857 start.go:901] validating driver "qemu2" against <nil>
	I1204 11:51:49.975268    1857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 11:51:49.979134    1857 out.go:169] Automatically selected the socket_vmnet network
	I1204 11:51:49.984999    1857 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1204 11:51:49.985093    1857 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 11:51:49.985134    1857 cni.go:84] Creating CNI manager for ""
	I1204 11:51:49.985181    1857 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 11:51:49.985246    1857 start.go:340] cluster config:
	{Name:download-only-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 11:51:49.989904    1857 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 11:51:49.993181    1857 out.go:97] Downloading VM boot image ...
	I1204 11:51:49.993195    1857 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1204 11:51:59.549400    1857 out.go:97] Starting "download-only-612000" primary control-plane node in "download-only-612000" cluster
	I1204 11:51:59.549426    1857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 11:51:59.627372    1857 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 11:51:59.627380    1857 cache.go:56] Caching tarball of preloaded images
	I1204 11:51:59.627650    1857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 11:51:59.631852    1857 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 11:51:59.631861    1857 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:51:59.729244    1857 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 11:52:14.195249    1857 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:52:14.195442    1857 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:52:14.912437    1857 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 11:52:14.912653    1857 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/download-only-612000/config.json ...
	I1204 11:52:14.912670    1857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/download-only-612000/config.json: {Name:mka66230f231944a3fd443dbe207fab79dc8531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 11:52:14.912972    1857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 11:52:14.913221    1857 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1204 11:52:15.569128    1857 out.go:193] 
	W1204 11:52:15.575165    1857 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320] Decompressors:map[bz2:0x14000693030 gz:0x14000693038 tar:0x14000692f80 tar.bz2:0x14000692f90 tar.gz:0x14000692fa0 tar.xz:0x14000692ff0 tar.zst:0x14000693010 tbz2:0x14000692f90 tgz:0x14000692fa0 txz:0x14000692ff0 tzst:0x14000693010 xz:0x14000693060 zip:0x14000693070 zst:0x14000693068] Getters:map[file:0x140018c4840 http:0x1400091a0a0 https:0x1400091a0f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1204 11:52:15.575203    1857 out_reason.go:110] 
	W1204 11:52:15.584137    1857 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 11:52:15.589102    1857 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-612000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (25.73s)
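
Note: the root failure above is the 404 from dl.k8s.io. Kubernetes appears not to publish a darwin/arm64 kubectl binary (or its .sha256 checksum file) for v1.20.0, so caching kubectl for this version cannot succeed on an Apple Silicon host. A minimal Go sketch to reproduce the 404 independently of minikube; the URL is copied from the error message, everything else is illustrative:

```go
// Probe the checksum URL that minikube's getter failed on.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Copied from the failure log; v1.20.0 seems to predate darwin/arm64 builds.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	// A "404 Not Found" here matches "bad response code: 404" in the log.
	fmt.Println(url, "->", resp.Status)
}
```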

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
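
Note: this subtest fails as a direct consequence of the json-events failure above: the kubectl binary never reached the cache, so the stat at aaa_download_only_test.go:175 reports "no such file or directory". A sketch of the same existence check, with the path copied from the failure message (adjust for a different MINIKUBE_HOME):

```go
// Re-run the cached-binary existence check the subtest performs.
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		fmt.Println("cache check failed:", err) // matches the error in the log
		return
	}
	fmt.Println("cached kubectl present:", path)
}
```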

TestOffline (10.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-992000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-992000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.971297791s)

-- stdout --
	* [offline-docker-992000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-992000" primary control-plane node in "offline-docker-992000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-992000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:50:51.909075    4870 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:50:51.909247    4870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:50:51.909251    4870 out.go:358] Setting ErrFile to fd 2...
	I1204 12:50:51.909253    4870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:50:51.909414    4870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:50:51.910755    4870 out.go:352] Setting JSON to false
	I1204 12:50:51.930428    4870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4822,"bootTime":1733340629,"procs":577,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:50:51.930523    4870 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:50:51.934995    4870 out.go:177] * [offline-docker-992000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:50:51.942000    4870 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:50:51.942043    4870 notify.go:220] Checking for updates...
	I1204 12:50:51.949839    4870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:50:51.951109    4870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:50:51.954919    4870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:50:51.957904    4870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:50:51.959104    4870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:50:51.962254    4870 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:50:51.962329    4870 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:50:51.965897    4870 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:50:51.970901    4870 start.go:297] selected driver: qemu2
	I1204 12:50:51.970910    4870 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:50:51.970916    4870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:50:51.973051    4870 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:50:51.976917    4870 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:50:51.978226    4870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:50:51.978242    4870 cni.go:84] Creating CNI manager for ""
	I1204 12:50:51.978266    4870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:50:51.978273    4870 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 12:50:51.978312    4870 start.go:340] cluster config:
	{Name:offline-docker-992000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:50:51.982961    4870 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:51.990934    4870 out.go:177] * Starting "offline-docker-992000" primary control-plane node in "offline-docker-992000" cluster
	I1204 12:50:51.994848    4870 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:50:51.994872    4870 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:50:51.994880    4870 cache.go:56] Caching tarball of preloaded images
	I1204 12:50:51.994966    4870 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:50:51.994972    4870 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:50:51.995037    4870 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/offline-docker-992000/config.json ...
	I1204 12:50:51.995047    4870 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/offline-docker-992000/config.json: {Name:mk4f194fe88294184b46d6c4cb16ebd1d6650819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:50:51.995417    4870 start.go:360] acquireMachinesLock for offline-docker-992000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:50:51.995471    4870 start.go:364] duration metric: took 43.666µs to acquireMachinesLock for "offline-docker-992000"
	I1204 12:50:51.995484    4870 start.go:93] Provisioning new machine with config: &{Name:offline-docker-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:50:51.995522    4870 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:50:51.999927    4870 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:50:52.015323    4870 start.go:159] libmachine.API.Create for "offline-docker-992000" (driver="qemu2")
	I1204 12:50:52.015365    4870 client.go:168] LocalClient.Create starting
	I1204 12:50:52.015456    4870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:50:52.015499    4870 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:52.015508    4870 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:52.015548    4870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:50:52.015580    4870 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:52.015591    4870 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:52.016019    4870 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:50:52.177973    4870 main.go:141] libmachine: Creating SSH key...
	I1204 12:50:52.281614    4870 main.go:141] libmachine: Creating Disk image...
	I1204 12:50:52.281628    4870 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:50:52.282041    4870 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2
	I1204 12:50:52.298095    4870 main.go:141] libmachine: STDOUT: 
	I1204 12:50:52.298129    4870 main.go:141] libmachine: STDERR: 
	I1204 12:50:52.298209    4870 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2 +20000M
	I1204 12:50:52.307665    4870 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:50:52.307687    4870 main.go:141] libmachine: STDERR: 
	I1204 12:50:52.307717    4870 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2
	I1204 12:50:52.307723    4870 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:50:52.307736    4870 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:50:52.307771    4870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:82:a8:d9:33:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2
	I1204 12:50:52.309854    4870 main.go:141] libmachine: STDOUT: 
	I1204 12:50:52.309867    4870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:50:52.309892    4870 client.go:171] duration metric: took 294.52375ms to LocalClient.Create
	I1204 12:50:54.311940    4870 start.go:128] duration metric: took 2.3164425s to createHost
	I1204 12:50:54.311952    4870 start.go:83] releasing machines lock for "offline-docker-992000", held for 2.316508625s
	W1204 12:50:54.311971    4870 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:50:54.319103    4870 out.go:177] * Deleting "offline-docker-992000" in qemu2 ...
	W1204 12:50:54.329757    4870 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:50:54.329769    4870 start.go:729] Will try again in 5 seconds ...
	I1204 12:50:59.331903    4870 start.go:360] acquireMachinesLock for offline-docker-992000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:50:59.332313    4870 start.go:364] duration metric: took 339.625µs to acquireMachinesLock for "offline-docker-992000"
	I1204 12:50:59.332423    4870 start.go:93] Provisioning new machine with config: &{Name:offline-docker-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:50:59.332598    4870 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:50:59.343985    4870 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:50:59.388853    4870 start.go:159] libmachine.API.Create for "offline-docker-992000" (driver="qemu2")
	I1204 12:50:59.388915    4870 client.go:168] LocalClient.Create starting
	I1204 12:50:59.389057    4870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:50:59.389143    4870 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:59.389163    4870 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:59.389244    4870 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:50:59.389301    4870 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:59.389316    4870 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:59.390011    4870 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:50:59.592578    4870 main.go:141] libmachine: Creating SSH key...
	I1204 12:50:59.773742    4870 main.go:141] libmachine: Creating Disk image...
	I1204 12:50:59.773753    4870 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:50:59.774009    4870 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2
	I1204 12:50:59.784230    4870 main.go:141] libmachine: STDOUT: 
	I1204 12:50:59.784250    4870 main.go:141] libmachine: STDERR: 
	I1204 12:50:59.784326    4870 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2 +20000M
	I1204 12:50:59.792784    4870 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:50:59.792799    4870 main.go:141] libmachine: STDERR: 
	I1204 12:50:59.792809    4870 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2
	I1204 12:50:59.792814    4870 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:50:59.792830    4870 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:50:59.792862    4870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:b8:f0:10:85:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/offline-docker-992000/disk.qcow2
	I1204 12:50:59.794612    4870 main.go:141] libmachine: STDOUT: 
	I1204 12:50:59.794627    4870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:50:59.794639    4870 client.go:171] duration metric: took 405.72475ms to LocalClient.Create
	I1204 12:51:01.796839    4870 start.go:128] duration metric: took 2.464222417s to createHost
	I1204 12:51:01.796921    4870 start.go:83] releasing machines lock for "offline-docker-992000", held for 2.464618542s
	W1204 12:51:01.797417    4870 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:01.812062    4870 out.go:201] 
	W1204 12:51:01.815198    4870 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:51:01.815222    4870 out.go:270] * 
	* 
	W1204 12:51:01.817567    4870 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:51:01.832161    4870 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-992000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-04 12:51:01.848678 -0800 PST m=+3552.143690501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-992000 -n offline-docker-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-992000 -n offline-docker-992000: exit status 7 (71.128417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-992000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-992000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-992000
--- FAIL: TestOffline (10.14s)
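
Note: TestOffline, and every later exit-status-80 failure in this report, shows the same signature: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet and gets "Connection refused", which points at the socket_vmnet daemon not running on the agent rather than at the tests themselves. A minimal sketch, assuming it is run on the affected host, that checks the socket directly:

```go
// Dial the socket_vmnet control socket that the qemu2 driver depends on.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the libmachine failure above and
		// usually means the socket_vmnet service (e.g. its launchd job) is down.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```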

TestCertOptions (10.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-655000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-655000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.888802416s)

-- stdout --
	* [cert-options-655000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-655000" primary control-plane node in "cert-options-655000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-655000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-655000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-655000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-655000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-655000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.955042ms)

-- stdout --
	* The control-plane node cert-options-655000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-655000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-655000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-655000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-655000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-655000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.936791ms)

-- stdout --
	* The control-plane node cert-options-655000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-655000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-655000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control-plane node cert-options-655000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-655000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-04 12:51:33.028828 -0800 PST m=+3583.324265584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-655000 -n cert-options-655000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-655000 -n cert-options-655000: exit status 7 (34.498ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-655000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-655000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-655000
--- FAIL: TestCertOptions (10.17s)

TestCertExpiration (195.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-420000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-420000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.804214s)

-- stdout --
	* [cert-expiration-420000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-420000" primary control-plane node in "cert-expiration-420000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-420000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-420000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
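The two --cert-expiration values drive this scenario: 3m makes the cluster certificates expire while the test waits, and the follow-up start with 8760h (one year) is expected to warn about and recover from the expired certs (see the assertion at cert_options_test.go:136 below). As a quick worked check that both flag values are plain Go durations (illustrative only):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        short, _ := time.ParseDuration("3m")   // forced early expiry
        long, _ := time.ParseDuration("8760h") // renewal value
        fmt.Println(short)                   // 3m0s
        fmt.Println(long.Hours()/24, "days") // 365 days
    }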
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-420000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-420000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.220678583s)

-- stdout --
	* [cert-expiration-420000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-420000" primary control-plane node in "cert-expiration-420000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-420000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-420000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-420000" primary control-plane node in "cert-expiration-420000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-04 12:54:33.116589 -0800 PST m=+3763.324191084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-420000 -n cert-expiration-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-420000 -n cert-expiration-420000: exit status 7 (67.412583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-420000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-420000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-420000
--- FAIL: TestCertExpiration (195.18s)

TestDockerFlags (10.1s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-227000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-227000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.848343667s)

-- stdout --
	* [docker-flags-227000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-227000" primary control-plane node in "docker-flags-227000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-227000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:51:12.911136    5067 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:51:12.911286    5067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:51:12.911289    5067 out.go:358] Setting ErrFile to fd 2...
	I1204 12:51:12.911291    5067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:51:12.911432    5067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:51:12.912613    5067 out.go:352] Setting JSON to false
	I1204 12:51:12.930621    5067 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4843,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:51:12.930698    5067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:51:12.937479    5067 out.go:177] * [docker-flags-227000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:51:12.946437    5067 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:51:12.946486    5067 notify.go:220] Checking for updates...
	I1204 12:51:12.954352    5067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:51:12.957436    5067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:51:12.960397    5067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:51:12.963373    5067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:51:12.966397    5067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:51:12.969695    5067 config.go:182] Loaded profile config "force-systemd-flag-883000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:51:12.969771    5067 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:51:12.969815    5067 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:51:12.974348    5067 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:51:12.980369    5067 start.go:297] selected driver: qemu2
	I1204 12:51:12.980376    5067 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:51:12.980383    5067 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:51:12.982984    5067 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:51:12.986407    5067 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:51:12.990466    5067 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1204 12:51:12.990490    5067 cni.go:84] Creating CNI manager for ""
	I1204 12:51:12.990515    5067 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:51:12.990521    5067 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 12:51:12.990558    5067 start.go:340] cluster config:
	{Name:docker-flags-227000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:51:12.995601    5067 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:51:13.004378    5067 out.go:177] * Starting "docker-flags-227000" primary control-plane node in "docker-flags-227000" cluster
	I1204 12:51:13.008212    5067 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:51:13.008229    5067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:51:13.008240    5067 cache.go:56] Caching tarball of preloaded images
	I1204 12:51:13.008329    5067 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:51:13.008335    5067 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:51:13.008402    5067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/docker-flags-227000/config.json ...
	I1204 12:51:13.008413    5067 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/docker-flags-227000/config.json: {Name:mk061cc463c11ec7e2de068121457c6cbbd77d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:51:13.008815    5067 start.go:360] acquireMachinesLock for docker-flags-227000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:51:13.008866    5067 start.go:364] duration metric: took 42.125µs to acquireMachinesLock for "docker-flags-227000"
	I1204 12:51:13.008879    5067 start.go:93] Provisioning new machine with config: &{Name:docker-flags-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:51:13.008908    5067 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:51:13.017242    5067 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:51:13.034552    5067 start.go:159] libmachine.API.Create for "docker-flags-227000" (driver="qemu2")
	I1204 12:51:13.034585    5067 client.go:168] LocalClient.Create starting
	I1204 12:51:13.034657    5067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:51:13.034694    5067 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:13.034708    5067 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:13.034744    5067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:51:13.034774    5067 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:13.034780    5067 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:13.035282    5067 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:51:13.196032    5067 main.go:141] libmachine: Creating SSH key...
	I1204 12:51:13.297398    5067 main.go:141] libmachine: Creating Disk image...
	I1204 12:51:13.297404    5067 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:51:13.297632    5067 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2
	I1204 12:51:13.307458    5067 main.go:141] libmachine: STDOUT: 
	I1204 12:51:13.307480    5067 main.go:141] libmachine: STDERR: 
	I1204 12:51:13.307540    5067 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2 +20000M
	I1204 12:51:13.315994    5067 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:51:13.316009    5067 main.go:141] libmachine: STDERR: 
	I1204 12:51:13.316028    5067 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2
	I1204 12:51:13.316036    5067 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:51:13.316049    5067 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:51:13.316078    5067 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:df:98:3f:70:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2
	I1204 12:51:13.317876    5067 main.go:141] libmachine: STDOUT: 
	I1204 12:51:13.317890    5067 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:51:13.317909    5067 client.go:171] duration metric: took 283.321083ms to LocalClient.Create
	I1204 12:51:15.320069    5067 start.go:128] duration metric: took 2.311165917s to createHost
	I1204 12:51:15.320156    5067 start.go:83] releasing machines lock for "docker-flags-227000", held for 2.311294833s
	W1204 12:51:15.320225    5067 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:15.348648    5067 out.go:177] * Deleting "docker-flags-227000" in qemu2 ...
	W1204 12:51:15.371859    5067 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:15.371889    5067 start.go:729] Will try again in 5 seconds ...
	I1204 12:51:20.373941    5067 start.go:360] acquireMachinesLock for docker-flags-227000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:51:20.374267    5067 start.go:364] duration metric: took 257.417µs to acquireMachinesLock for "docker-flags-227000"
	I1204 12:51:20.374330    5067 start.go:93] Provisioning new machine with config: &{Name:docker-flags-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:51:20.374620    5067 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:51:20.387394    5067 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:51:20.427952    5067 start.go:159] libmachine.API.Create for "docker-flags-227000" (driver="qemu2")
	I1204 12:51:20.428013    5067 client.go:168] LocalClient.Create starting
	I1204 12:51:20.428175    5067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:51:20.428256    5067 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:20.428272    5067 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:20.428339    5067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:51:20.428400    5067 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:20.428415    5067 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:20.429210    5067 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:51:20.599765    5067 main.go:141] libmachine: Creating SSH key...
	I1204 12:51:20.648720    5067 main.go:141] libmachine: Creating Disk image...
	I1204 12:51:20.648727    5067 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:51:20.648921    5067 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2
	I1204 12:51:20.658783    5067 main.go:141] libmachine: STDOUT: 
	I1204 12:51:20.658803    5067 main.go:141] libmachine: STDERR: 
	I1204 12:51:20.658852    5067 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2 +20000M
	I1204 12:51:20.667327    5067 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:51:20.667349    5067 main.go:141] libmachine: STDERR: 
	I1204 12:51:20.667360    5067 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2
	I1204 12:51:20.667366    5067 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:51:20.667379    5067 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:51:20.667417    5067 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:f6:cf:4d:16:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/docker-flags-227000/disk.qcow2
	I1204 12:51:20.669211    5067 main.go:141] libmachine: STDOUT: 
	I1204 12:51:20.669225    5067 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:51:20.669236    5067 client.go:171] duration metric: took 241.21825ms to LocalClient.Create
	I1204 12:51:22.671427    5067 start.go:128] duration metric: took 2.296804792s to createHost
	I1204 12:51:22.671510    5067 start.go:83] releasing machines lock for "docker-flags-227000", held for 2.297249875s
	W1204 12:51:22.671900    5067 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:22.683338    5067 out.go:201] 
	W1204 12:51:22.695954    5067 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:51:22.696027    5067 out.go:270] * 
	* 
	W1204 12:51:22.698537    5067 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:51:22.712685    5067 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-227000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-227000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-227000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (86.9515ms)

-- stdout --
	* The control-plane node docker-flags-227000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-227000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-227000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-227000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-227000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-227000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-227000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-227000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-227000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (48.738917ms)

-- stdout --
	* The control-plane node docker-flags-227000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-227000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-227000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-227000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug* . output: "* The control-plane node docker-flags-227000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-227000\"\n"
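docker_test.go:63 and :73 effectively verify the flags by substring-matching the systemctl output: each --docker-env pair must show up under Environment= and each --docker-opt under ExecStart. A hedged sketch of that check (the sample strings are made up; with the host stopped, the real output above was only the "host is not running" hint):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // What `systemctl show docker` might report on a healthy node.
        envOut := "Environment=FOO=BAR BAZ=BAT"
        execOut := "ExecStart=/usr/bin/dockerd --debug --icc=true"
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            fmt.Printf("%s present: %v\n", want, strings.Contains(envOut, want))
        }
        fmt.Println("--debug present:", strings.Contains(execOut, "--debug"))
    }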
panic.go:629: *** TestDockerFlags FAILED at 2024-12-04 12:51:22.865615 -0800 PST m=+3573.160913834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-227000 -n docker-flags-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-227000 -n docker-flags-227000: exit status 7 (33.97025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-227000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-227000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-227000
--- FAIL: TestDockerFlags (10.10s)

TestForceSystemdFlag (10.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-883000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-883000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.981295833s)

-- stdout --
	* [force-systemd-flag-883000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-883000" primary control-plane node in "force-systemd-flag-883000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-883000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:51:07.846827    5046 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:51:07.846965    5046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:51:07.846969    5046 out.go:358] Setting ErrFile to fd 2...
	I1204 12:51:07.846971    5046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:51:07.847104    5046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:51:07.848289    5046 out.go:352] Setting JSON to false
	I1204 12:51:07.866140    5046 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4838,"bootTime":1733340629,"procs":581,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:51:07.866221    5046 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:51:07.873241    5046 out.go:177] * [force-systemd-flag-883000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:51:07.889243    5046 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:51:07.889287    5046 notify.go:220] Checking for updates...
	I1204 12:51:07.900163    5046 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:51:07.904190    5046 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:51:07.907167    5046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:51:07.910167    5046 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:51:07.913223    5046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:51:07.916534    5046 config.go:182] Loaded profile config "force-systemd-env-825000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:51:07.916617    5046 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:51:07.916668    5046 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:51:07.921165    5046 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:51:07.928085    5046 start.go:297] selected driver: qemu2
	I1204 12:51:07.928091    5046 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:51:07.928096    5046 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:51:07.930796    5046 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:51:07.934165    5046 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:51:07.937307    5046 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 12:51:07.937322    5046 cni.go:84] Creating CNI manager for ""
	I1204 12:51:07.937351    5046 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:51:07.937355    5046 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 12:51:07.937393    5046 start.go:340] cluster config:
	{Name:force-systemd-flag-883000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:51:07.942347    5046 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:51:07.951199    5046 out.go:177] * Starting "force-systemd-flag-883000" primary control-plane node in "force-systemd-flag-883000" cluster
	I1204 12:51:07.954151    5046 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:51:07.954170    5046 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:51:07.954176    5046 cache.go:56] Caching tarball of preloaded images
	I1204 12:51:07.954250    5046 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:51:07.954256    5046 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:51:07.954321    5046 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/force-systemd-flag-883000/config.json ...
	I1204 12:51:07.954333    5046 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/force-systemd-flag-883000/config.json: {Name:mk6b3e67cb697307d721e9d8514b57fc5d8e6034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:51:07.954883    5046 start.go:360] acquireMachinesLock for force-systemd-flag-883000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:51:07.954938    5046 start.go:364] duration metric: took 47µs to acquireMachinesLock for "force-systemd-flag-883000"
	I1204 12:51:07.954952    5046 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-883000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:51:07.954978    5046 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:51:07.961141    5046 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:51:07.978844    5046 start.go:159] libmachine.API.Create for "force-systemd-flag-883000" (driver="qemu2")
	I1204 12:51:07.978873    5046 client.go:168] LocalClient.Create starting
	I1204 12:51:07.978949    5046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:51:07.978988    5046 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:07.978998    5046 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:07.979037    5046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:51:07.979067    5046 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:07.979074    5046 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:07.979592    5046 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:51:08.141304    5046 main.go:141] libmachine: Creating SSH key...
	I1204 12:51:08.204032    5046 main.go:141] libmachine: Creating Disk image...
	I1204 12:51:08.204038    5046 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:51:08.204246    5046 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2
	I1204 12:51:08.214026    5046 main.go:141] libmachine: STDOUT: 
	I1204 12:51:08.214047    5046 main.go:141] libmachine: STDERR: 
	I1204 12:51:08.214109    5046 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2 +20000M
	I1204 12:51:08.222612    5046 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:51:08.222628    5046 main.go:141] libmachine: STDERR: 
	I1204 12:51:08.222643    5046 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2
	I1204 12:51:08.222649    5046 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:51:08.222664    5046 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:51:08.222692    5046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:4c:99:97:eb:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2
	I1204 12:51:08.224470    5046 main.go:141] libmachine: STDOUT: 
	I1204 12:51:08.224483    5046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:51:08.224505    5046 client.go:171] duration metric: took 245.628959ms to LocalClient.Create
	I1204 12:51:10.226644    5046 start.go:128] duration metric: took 2.271681208s to createHost
	I1204 12:51:10.226697    5046 start.go:83] releasing machines lock for "force-systemd-flag-883000", held for 2.271780917s
	W1204 12:51:10.226819    5046 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:10.246926    5046 out.go:177] * Deleting "force-systemd-flag-883000" in qemu2 ...
	W1204 12:51:10.275589    5046 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:10.275604    5046 start.go:729] Will try again in 5 seconds ...
	I1204 12:51:15.277840    5046 start.go:360] acquireMachinesLock for force-systemd-flag-883000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:51:15.320272    5046 start.go:364] duration metric: took 42.30825ms to acquireMachinesLock for "force-systemd-flag-883000"
	I1204 12:51:15.320437    5046 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-883000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:51:15.320703    5046 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:51:15.336623    5046 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:51:15.383905    5046 start.go:159] libmachine.API.Create for "force-systemd-flag-883000" (driver="qemu2")
	I1204 12:51:15.383972    5046 client.go:168] LocalClient.Create starting
	I1204 12:51:15.384130    5046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:51:15.384222    5046 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:15.384240    5046 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:15.384300    5046 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:51:15.384370    5046 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:15.384389    5046 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:15.385006    5046 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:51:15.557465    5046 main.go:141] libmachine: Creating SSH key...
	I1204 12:51:15.720151    5046 main.go:141] libmachine: Creating Disk image...
	I1204 12:51:15.720162    5046 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:51:15.720390    5046 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2
	I1204 12:51:15.730826    5046 main.go:141] libmachine: STDOUT: 
	I1204 12:51:15.730850    5046 main.go:141] libmachine: STDERR: 
	I1204 12:51:15.730918    5046 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2 +20000M
	I1204 12:51:15.739401    5046 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:51:15.739416    5046 main.go:141] libmachine: STDERR: 
	I1204 12:51:15.739429    5046 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2
	I1204 12:51:15.739440    5046 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:51:15.739449    5046 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:51:15.739485    5046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:77:c3:b8:4d:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-flag-883000/disk.qcow2
	I1204 12:51:15.741311    5046 main.go:141] libmachine: STDOUT: 
	I1204 12:51:15.741325    5046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:51:15.741355    5046 client.go:171] duration metric: took 357.375959ms to LocalClient.Create
	I1204 12:51:17.743511    5046 start.go:128] duration metric: took 2.422796833s to createHost
	I1204 12:51:17.743554    5046 start.go:83] releasing machines lock for "force-systemd-flag-883000", held for 2.423267833s
	W1204 12:51:17.743858    5046 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-883000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-883000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:17.758515    5046 out.go:201] 
	W1204 12:51:17.770764    5046 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:51:17.770793    5046 out.go:270] * 
	* 
	W1204 12:51:17.773443    5046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:51:17.783441    5046 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-883000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-883000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-883000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (85.157208ms)

-- stdout --
	* The control-plane node force-systemd-flag-883000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-883000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-883000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-04 12:51:17.884477 -0800 PST m=+3568.179708209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-883000 -n force-systemd-flag-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-883000 -n force-systemd-flag-883000: exit status 7 (33.872416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-883000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-883000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-883000
--- FAIL: TestForceSystemdFlag (10.18s)
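Both create attempts above die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the profile is torn down. The Go sketch below reproduces that probe, assuming only the socket path shown in the logs; checkVmnetSocket is an illustrative name, not minikube's API.

// checkVmnetSocket reproduces the connection probe socket_vmnet_client
// performs: dial the unix socket and report whether the socket_vmnet
// daemon is accepting connections. Illustrative sketch, not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

func checkVmnetSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// "connection refused" matches the failure above: the socket file
		// may exist on disk, but no daemon is listening behind it.
		return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
	}
	conn.Close()
	return nil
}

func main() {
	if err := checkVmnetSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket_vmnet is accepting connections")
}

When the probe reports "connection refused", the remedy is on the host side: the socket_vmnet daemon itself has to be (re)started, which is outside the test's control.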

TestForceSystemdEnv (10.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-825000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1204 12:51:02.456733    1856 install.go:79] stdout: 
W1204 12:51:02.456850    1856 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit 

I1204 12:51:02.456868    1856 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit]
I1204 12:51:02.469267    1856 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit]
I1204 12:51:02.480291    1856 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit]
I1204 12:51:02.491344    1856 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit]
I1204 12:51:02.513495    1856 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 12:51:02.513624    1856 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1204 12:51:04.320976    1856 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1204 12:51:04.320997    1856 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1204 12:51:04.321037    1856 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1204 12:51:04.321065    1856 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit
I1204 12:51:04.707153    1856 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0] Decompressors:map[bz2:0x140006815f0 gz:0x140006815f8 tar:0x14000681590 tar.bz2:0x140006815b0 tar.gz:0x140006815c0 tar.xz:0x140006815d0 tar.zst:0x140006815e0 tbz2:0x140006815b0 tgz:0x140006815c0 txz:0x140006815d0 tzst:0x140006815e0 xz:0x14000681600 zip:0x14000681610 zst:0x14000681608] Getters:map[file:0x140007e5040 http:0x14000578960 https:0x140005789b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 12:51:04.707227    1856 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit
I1204 12:51:07.762681    1856 install.go:79] stdout: 
W1204 12:51:07.762849    1856 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit 

I1204 12:51:07.762878    1856 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit]
I1204 12:51:07.779503    1856 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit]
I1204 12:51:07.792470    1856 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit]
I1204 12:51:07.803134    1856 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/002/docker-machine-driver-hyperkit]
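The interleaved TestHyperKitDriverInstallOrUpdate output above records a two-step download: the arch-specific asset docker-machine-driver-hyperkit-arm64 fails checksum validation because its .sha256 file returns 404, and the updater falls back to the common, un-suffixed asset. A minimal sketch of that fallback, assuming only the URL pattern visible in the log (fetch and downloadDriver are illustrative names, not minikube's API):

// downloadDriver tries the arch-specific release asset first and, on any
// error (here, a 404 on the checksum file), retries the common asset name.
// Illustrative sketch of the fallback recorded in the log above.
package main

import (
	"fmt"
	"net/http"
)

func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	// real code would stream resp.Body to disk and verify the checksum
	return nil
}

func downloadDriver(base string) error {
	if err := fetch(base + "-arm64"); err == nil {
		return nil
	}
	// arch-specific asset missing or unverifiable: try the common version
	return fetch(base)
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	if err := downloadDriver(base); err != nil {
		fmt.Println(err)
	}
}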
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-825000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.659824583s)

-- stdout --
	* [force-systemd-env-825000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-825000" primary control-plane node in "force-systemd-env-825000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-825000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:51:02.043424    5009 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:51:02.043577    5009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:51:02.043581    5009 out.go:358] Setting ErrFile to fd 2...
	I1204 12:51:02.043583    5009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:51:02.043692    5009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:51:02.044915    5009 out.go:352] Setting JSON to false
	I1204 12:51:02.063236    5009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4833,"bootTime":1733340629,"procs":581,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:51:02.063321    5009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:51:02.069064    5009 out.go:177] * [force-systemd-env-825000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:51:02.076358    5009 notify.go:220] Checking for updates...
	I1204 12:51:02.082933    5009 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:51:02.090798    5009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:51:02.098951    5009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:51:02.105892    5009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:51:02.112904    5009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:51:02.120960    5009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1204 12:51:02.125226    5009 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:51:02.125278    5009 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:51:02.128946    5009 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:51:02.135943    5009 start.go:297] selected driver: qemu2
	I1204 12:51:02.135949    5009 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:51:02.135954    5009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:51:02.138503    5009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:51:02.141928    5009 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:51:02.146132    5009 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 12:51:02.146147    5009 cni.go:84] Creating CNI manager for ""
	I1204 12:51:02.146176    5009 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:51:02.146181    5009 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 12:51:02.146219    5009 start.go:340] cluster config:
	{Name:force-systemd-env-825000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-825000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:51:02.150732    5009 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:51:02.158956    5009 out.go:177] * Starting "force-systemd-env-825000" primary control-plane node in "force-systemd-env-825000" cluster
	I1204 12:51:02.162888    5009 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:51:02.162914    5009 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:51:02.162924    5009 cache.go:56] Caching tarball of preloaded images
	I1204 12:51:02.163037    5009 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:51:02.163044    5009 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:51:02.163110    5009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/force-systemd-env-825000/config.json ...
	I1204 12:51:02.163124    5009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/force-systemd-env-825000/config.json: {Name:mk7b9b1386c81298e345f88b28dadf63e39aa73a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:51:02.163384    5009 start.go:360] acquireMachinesLock for force-systemd-env-825000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:51:02.163435    5009 start.go:364] duration metric: took 42.458µs to acquireMachinesLock for "force-systemd-env-825000"
	I1204 12:51:02.163447    5009 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-825000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-825000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:51:02.163476    5009 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:51:02.170895    5009 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:51:02.186890    5009 start.go:159] libmachine.API.Create for "force-systemd-env-825000" (driver="qemu2")
	I1204 12:51:02.186922    5009 client.go:168] LocalClient.Create starting
	I1204 12:51:02.186988    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:51:02.187025    5009 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:02.187041    5009 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:02.187085    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:51:02.187115    5009 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:02.187121    5009 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:02.187483    5009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:51:02.356859    5009 main.go:141] libmachine: Creating SSH key...
	I1204 12:51:02.424098    5009 main.go:141] libmachine: Creating Disk image...
	I1204 12:51:02.424106    5009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:51:02.424458    5009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2
	I1204 12:51:02.434826    5009 main.go:141] libmachine: STDOUT: 
	I1204 12:51:02.434848    5009 main.go:141] libmachine: STDERR: 
	I1204 12:51:02.434913    5009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2 +20000M
	I1204 12:51:02.444556    5009 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:51:02.444575    5009 main.go:141] libmachine: STDERR: 
	I1204 12:51:02.444607    5009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2
	I1204 12:51:02.444612    5009 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:51:02.444626    5009 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:51:02.444657    5009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:57:24:a6:0e:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2
	I1204 12:51:02.446848    5009 main.go:141] libmachine: STDOUT: 
	I1204 12:51:02.446862    5009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:51:02.446888    5009 client.go:171] duration metric: took 259.961667ms to LocalClient.Create
	I1204 12:51:04.448955    5009 start.go:128] duration metric: took 2.285500333s to createHost
	I1204 12:51:04.448984    5009 start.go:83] releasing machines lock for "force-systemd-env-825000", held for 2.285574333s
	W1204 12:51:04.449006    5009 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:04.458042    5009 out.go:177] * Deleting "force-systemd-env-825000" in qemu2 ...
	W1204 12:51:04.483667    5009 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:04.483678    5009 start.go:729] Will try again in 5 seconds ...
	I1204 12:51:09.485857    5009 start.go:360] acquireMachinesLock for force-systemd-env-825000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:51:10.226810    5009 start.go:364] duration metric: took 740.824ms to acquireMachinesLock for "force-systemd-env-825000"
	I1204 12:51:10.227011    5009 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-825000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-825000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:51:10.227293    5009 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:51:10.237968    5009 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 12:51:10.285161    5009 start.go:159] libmachine.API.Create for "force-systemd-env-825000" (driver="qemu2")
	I1204 12:51:10.285212    5009 client.go:168] LocalClient.Create starting
	I1204 12:51:10.285348    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:51:10.285457    5009 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:10.285471    5009 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:10.285539    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:51:10.285597    5009 main.go:141] libmachine: Decoding PEM data...
	I1204 12:51:10.285609    5009 main.go:141] libmachine: Parsing certificate...
	I1204 12:51:10.286182    5009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:51:10.519159    5009 main.go:141] libmachine: Creating SSH key...
	I1204 12:51:10.589291    5009 main.go:141] libmachine: Creating Disk image...
	I1204 12:51:10.589297    5009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:51:10.589489    5009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2
	I1204 12:51:10.599522    5009 main.go:141] libmachine: STDOUT: 
	I1204 12:51:10.599556    5009 main.go:141] libmachine: STDERR: 
	I1204 12:51:10.599630    5009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2 +20000M
	I1204 12:51:10.608185    5009 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:51:10.608200    5009 main.go:141] libmachine: STDERR: 
	I1204 12:51:10.608220    5009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2
	I1204 12:51:10.608229    5009 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:51:10.608239    5009 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:51:10.608266    5009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:ad:92:98:85:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/force-systemd-env-825000/disk.qcow2
	I1204 12:51:10.610108    5009 main.go:141] libmachine: STDOUT: 
	I1204 12:51:10.610122    5009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:51:10.610136    5009 client.go:171] duration metric: took 324.922209ms to LocalClient.Create
	I1204 12:51:12.612418    5009 start.go:128] duration metric: took 2.385084958s to createHost
	I1204 12:51:12.612522    5009 start.go:83] releasing machines lock for "force-systemd-env-825000", held for 2.385692209s
	W1204 12:51:12.612861    5009 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-825000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-825000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:51:12.631621    5009 out.go:201] 
	W1204 12:51:12.641502    5009 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:51:12.641529    5009 out.go:270] * 
	* 
	W1204 12:51:12.644094    5009 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:51:12.655372    5009 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-825000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-825000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-825000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (85.727209ms)

-- stdout --
	* The control-plane node force-systemd-env-825000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-825000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-825000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-04 12:51:12.758487 -0800 PST m=+3563.053648501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-825000 -n force-systemd-env-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-825000 -n force-systemd-env-825000: exit status 7 (38.849083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-825000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-825000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-825000
--- FAIL: TestForceSystemdEnv (10.87s)

TestFunctional/parallel/ServiceCmdConnect (33.36s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-306000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-306000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-f2pbl" [376abad6-fec2-4639-8572-9a5341918b6a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-f2pbl" [376abad6-fec2-4639-8572-9a5341918b6a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.007651875s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31149
functional_test.go:1661: error fetching http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
I1204 12:02:45.288959    1856 retry.go:31] will retry after 659.640226ms: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
I1204 12:02:45.952202    1856 retry.go:31] will retry after 799.820604ms: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
I1204 12:02:46.755853    1856 retry.go:31] will retry after 2.617876559s: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
I1204 12:02:49.377685    1856 retry.go:31] will retry after 2.09311531s: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
I1204 12:02:51.474996    1856 retry.go:31] will retry after 3.260728675s: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
I1204 12:02:54.738246    1856 retry.go:31] will retry after 7.586912949s: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31149: Get "http://192.168.105.4:31149": dial tcp 192.168.105.4:31149: connect: connection refused
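The loop above is minikube's retry helper at work: each refused GET against the NodePort is followed by a sleep that grows roughly exponentially, with jitter, until the overall budget is spent. A minimal sketch of the same pattern (retryGet is an illustrative name, not minikube's actual retry API):

// retryGet retries an HTTP GET with a doubling, jittered delay, mirroring
// the "will retry after ..." lines in the log above. Illustrative only.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func retryGet(url string, attempts int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		var resp *http.Response
		if resp, err = http.Get(url); err == nil {
			resp.Body.Close()
			return nil
		}
		// jitter the delay so concurrent retries don't synchronize
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	if err := retryGet("http://192.168.105.4:31149", 6); err != nil {
		fmt.Println("giving up:", err)
	}
}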
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-306000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-f2pbl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-306000/192.168.105.4
Start Time:       Wed, 04 Dec 2024 12:02:30 -0800
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://f841eafd7583292283577759a807b7186cf01a033b947505a1f032a7d35e1cbb
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 04 Dec 2024 12:02:54 -0800
      Finished:     Wed, 04 Dec 2024 12:02:54 -0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 04 Dec 2024 12:02:38 -0800
      Finished:     Wed, 04 Dec 2024 12:02:38 -0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dmkjn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-dmkjn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  32s               default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-f2pbl to functional-306000
  Normal   Pulling    32s               kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     25s               kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.377s (7.123s including waiting). Image size: 84957542 bytes.
  Normal   Created    9s (x3 over 25s)  kubelet            Created container echoserver-arm
  Normal   Pulled     9s (x2 over 24s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Started    8s (x3 over 25s)  kubelet            Started container echoserver-arm
  Warning  BackOff    8s (x3 over 23s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-f2pbl_default(376abad6-fec2-4639-8572-9a5341918b6a)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-306000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
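That one log line is the root cause: the image's /usr/sbin/nginx binary was built for a different CPU architecture than the node, so every container start exits immediately with "exec format error" and the pod never becomes Ready. A quick check is `docker manifest inspect`, which lists the platforms an image manifest actually provides; the Go wrapper below is illustrative, not part of the test suite.

// inspectImageArch shells out to `docker manifest inspect` to show which
// architectures an image manifest provides; useful when a container dies
// with "exec format error". Illustrative helper, not test-suite code.
package main

import (
	"fmt"
	"os/exec"
)

func inspectImageArch(image string) (string, error) {
	out, err := exec.Command("docker", "manifest", "inspect", image).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("docker manifest inspect %s: %v\n%s", image, err, out)
	}
	return string(out), nil
}

func main() {
	out, err := inspectImageArch("registry.k8s.io/echoserver-arm:1.8")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(out)
}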
functional_test.go:1614: (dbg) Run:  kubectl --context functional-306000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.72.16
IPs:                      10.103.72.16
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31149/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
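Note the empty Endpoints field in the service description: because the pod never becomes Ready, the service has no backends, which is why every GET against NodePort 31149 above was refused outright rather than timing out.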
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-306000 -n functional-306000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-306000 service                                                                                            | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| service | functional-306000 service list                                                                                       | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	| service | functional-306000 service list                                                                                       | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-306000 service                                                                                            | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-306000                                                                                                    | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-306000 service                                                                                            | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| mount   | -p functional-306000                                                                                                 | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3830650269/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh findmnt                                                                                        | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh findmnt                                                                                        | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh -- ls                                                                                          | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh cat                                                                                            | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:02 PST | 04 Dec 24 12:02 PST |
	|         | /mount-9p/test-1733342575178937000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh stat                                                                                           | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST | 04 Dec 24 12:03 PST |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh stat                                                                                           | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST | 04 Dec 24 12:03 PST |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh sudo                                                                                           | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST | 04 Dec 24 12:03 PST |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh findmnt                                                                                        | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-306000                                                                                                 | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1497503367/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh findmnt                                                                                        | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST | 04 Dec 24 12:03 PST |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh -- ls                                                                                          | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST | 04 Dec 24 12:03 PST |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh sudo                                                                                           | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-306000                                                                                                 | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount1   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-306000                                                                                                 | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount2   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-306000                                                                                                 | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount3   |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh findmnt                                                                                        | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh findmnt                                                                                        | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST | 04 Dec 24 12:03 PST |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-306000 ssh findmnt                                                                                        | functional-306000 | jenkins | v1.34.0 | 04 Dec 24 12:03 PST |                     |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 12:01:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 12:01:35.655234    2617 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:01:35.655403    2617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:01:35.655404    2617 out.go:358] Setting ErrFile to fd 2...
	I1204 12:01:35.655406    2617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:01:35.655539    2617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:01:35.656644    2617 out.go:352] Setting JSON to false
	I1204 12:01:35.674718    2617 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1866,"bootTime":1733340629,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:01:35.674800    2617 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:01:35.680215    2617 out.go:177] * [functional-306000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:01:35.689228    2617 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:01:35.689292    2617 notify.go:220] Checking for updates...
	I1204 12:01:35.697093    2617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:01:35.701085    2617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:01:35.704247    2617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:01:35.707127    2617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:01:35.710151    2617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:01:35.713565    2617 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:01:35.713626    2617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:01:35.718140    2617 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:01:35.725180    2617 start.go:297] selected driver: qemu2
	I1204 12:01:35.725183    2617 start.go:901] validating driver "qemu2" against &{Name:functional-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:01:35.725235    2617 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:01:35.727774    2617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:01:35.727795    2617 cni.go:84] Creating CNI manager for ""
	I1204 12:01:35.727823    2617 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:01:35.727870    2617 start.go:340] cluster config:
	{Name:functional-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:01:35.732315    2617 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:01:35.740217    2617 out.go:177] * Starting "functional-306000" primary control-plane node in "functional-306000" cluster
	I1204 12:01:35.743964    2617 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:01:35.743976    2617 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:01:35.743983    2617 cache.go:56] Caching tarball of preloaded images
	I1204 12:01:35.744046    2617 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:01:35.744050    2617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:01:35.744096    2617 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/config.json ...
	I1204 12:01:35.744592    2617 start.go:360] acquireMachinesLock for functional-306000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:01:35.744639    2617 start.go:364] duration metric: took 42.542µs to acquireMachinesLock for "functional-306000"
	I1204 12:01:35.744646    2617 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:01:35.744649    2617 fix.go:54] fixHost starting: 
	I1204 12:01:35.745239    2617 fix.go:112] recreateIfNeeded on functional-306000: state=Running err=<nil>
	W1204 12:01:35.745245    2617 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:01:35.753208    2617 out.go:177] * Updating the running qemu2 "functional-306000" VM ...
	I1204 12:01:35.757149    2617 machine.go:93] provisionDockerMachine start ...
	I1204 12:01:35.757201    2617 main.go:141] libmachine: Using SSH client type: native
	I1204 12:01:35.757337    2617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100716fc0] 0x100719800 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1204 12:01:35.757340    2617 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 12:01:35.804839    2617 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-306000
	
	I1204 12:01:35.804849    2617 buildroot.go:166] provisioning hostname "functional-306000"
	I1204 12:01:35.804899    2617 main.go:141] libmachine: Using SSH client type: native
	I1204 12:01:35.805015    2617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100716fc0] 0x100719800 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1204 12:01:35.805019    2617 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-306000 && echo "functional-306000" | sudo tee /etc/hostname
	I1204 12:01:35.855748    2617 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-306000
	
	I1204 12:01:35.855808    2617 main.go:141] libmachine: Using SSH client type: native
	I1204 12:01:35.855919    2617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100716fc0] 0x100719800 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1204 12:01:35.855925    2617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-306000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-306000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-306000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 12:01:35.902111    2617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 12:01:35.902120    2617 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19985-1334/.minikube CaCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19985-1334/.minikube}
	I1204 12:01:35.902126    2617 buildroot.go:174] setting up certificates
	I1204 12:01:35.902130    2617 provision.go:84] configureAuth start
	I1204 12:01:35.902137    2617 provision.go:143] copyHostCerts
	I1204 12:01:35.902210    2617 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem, removing ...
	I1204 12:01:35.902214    2617 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem
	I1204 12:01:35.902506    2617 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem (1123 bytes)
	I1204 12:01:35.902703    2617 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem, removing ...
	I1204 12:01:35.902706    2617 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem
	I1204 12:01:35.902767    2617 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem (1679 bytes)
	I1204 12:01:35.902902    2617 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem, removing ...
	I1204 12:01:35.902904    2617 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem
	I1204 12:01:35.902963    2617 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem (1082 bytes)
	I1204 12:01:35.903061    2617 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem org=jenkins.functional-306000 san=[127.0.0.1 192.168.105.4 functional-306000 localhost minikube]
	I1204 12:01:36.014799    2617 provision.go:177] copyRemoteCerts
	I1204 12:01:36.014848    2617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 12:01:36.014855    2617 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
	I1204 12:01:36.040772    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 12:01:36.049232    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1204 12:01:36.058580    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 12:01:36.066713    2617 provision.go:87] duration metric: took 164.572417ms to configureAuth
	I1204 12:01:36.066722    2617 buildroot.go:189] setting minikube options for container-runtime
	I1204 12:01:36.066850    2617 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:01:36.066895    2617 main.go:141] libmachine: Using SSH client type: native
	I1204 12:01:36.066983    2617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100716fc0] 0x100719800 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1204 12:01:36.066986    2617 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 12:01:36.111654    2617 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 12:01:36.111659    2617 buildroot.go:70] root file system type: tmpfs
	I1204 12:01:36.111706    2617 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 12:01:36.111795    2617 main.go:141] libmachine: Using SSH client type: native
	I1204 12:01:36.111904    2617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100716fc0] 0x100719800 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1204 12:01:36.111934    2617 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 12:01:36.162792    2617 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
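	Note: the empty "ExecStart=" directive in the unit above is the standard systemd idiom for clearing an inherited ExecStart before assigning a new one, exactly as its comments describe. The same reset can also be done with a drop-in override instead of replacing the whole unit file; a minimal sketch (the short dockerd command line here is a placeholder, not minikube's full set of flags):
	
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    # an empty ExecStart= clears the inherited command; the next line sets the replacement
	    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
	    sudo systemctl daemon-reload && sudo systemctl restart docker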
	I1204 12:01:36.162840    2617 main.go:141] libmachine: Using SSH client type: native
	I1204 12:01:36.162980    2617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100716fc0] 0x100719800 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1204 12:01:36.162986    2617 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 12:01:36.212439    2617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 12:01:36.212446    2617 machine.go:96] duration metric: took 455.3005ms to provisionDockerMachine
	I1204 12:01:36.212450    2617 start.go:293] postStartSetup for "functional-306000" (driver="qemu2")
	I1204 12:01:36.212455    2617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 12:01:36.212505    2617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 12:01:36.212512    2617 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
	I1204 12:01:36.239392    2617 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 12:01:36.240944    2617 info.go:137] Remote host: Buildroot 2023.02.9
	I1204 12:01:36.240948    2617 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/addons for local assets ...
	I1204 12:01:36.241034    2617 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/files for local assets ...
	I1204 12:01:36.241179    2617 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem -> 18562.pem in /etc/ssl/certs
	I1204 12:01:36.241324    2617 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/test/nested/copy/1856/hosts -> hosts in /etc/test/nested/copy/1856
	I1204 12:01:36.241371    2617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1856
	I1204 12:01:36.244984    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:01:36.253352    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/test/nested/copy/1856/hosts --> /etc/test/nested/copy/1856/hosts (40 bytes)
	I1204 12:01:36.261530    2617 start.go:296] duration metric: took 49.076333ms for postStartSetup
	I1204 12:01:36.261540    2617 fix.go:56] duration metric: took 516.900542ms for fixHost
	I1204 12:01:36.261591    2617 main.go:141] libmachine: Using SSH client type: native
	I1204 12:01:36.261691    2617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100716fc0] 0x100719800 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1204 12:01:36.261694    2617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 12:01:36.307159    2617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733342496.437273898
	
	I1204 12:01:36.307164    2617 fix.go:216] guest clock: 1733342496.437273898
	I1204 12:01:36.307168    2617 fix.go:229] Guest: 2024-12-04 12:01:36.437273898 -0800 PST Remote: 2024-12-04 12:01:36.261541 -0800 PST m=+0.626792001 (delta=175.732898ms)
	I1204 12:01:36.307176    2617 fix.go:200] guest clock delta is within tolerance: 175.732898ms
	I1204 12:01:36.307178    2617 start.go:83] releasing machines lock for "functional-306000", held for 562.54725ms
	I1204 12:01:36.307469    2617 ssh_runner.go:195] Run: cat /version.json
	I1204 12:01:36.307475    2617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 12:01:36.307475    2617 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
	I1204 12:01:36.307488    2617 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
	I1204 12:01:36.374538    2617 ssh_runner.go:195] Run: systemctl --version
	I1204 12:01:36.376566    2617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 12:01:36.378424    2617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 12:01:36.378452    2617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 12:01:36.381890    2617 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1204 12:01:36.381894    2617 start.go:495] detecting cgroup driver to use...
	I1204 12:01:36.381956    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:01:36.388014    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1204 12:01:36.391833    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 12:01:36.395691    2617 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 12:01:36.395730    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 12:01:36.399757    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:01:36.403963    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 12:01:36.407802    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:01:36.411866    2617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 12:01:36.416004    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 12:01:36.420345    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 12:01:36.424588    2617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 12:01:36.428738    2617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 12:01:36.432597    2617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 12:01:36.436228    2617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:01:36.533829    2617 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 12:01:36.544194    2617 start.go:495] detecting cgroup driver to use...
	I1204 12:01:36.544260    2617 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 12:01:36.550861    2617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:01:36.556269    2617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 12:01:36.569118    2617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:01:36.575067    2617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 12:01:36.580590    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:01:36.587407    2617 ssh_runner.go:195] Run: which cri-dockerd
	I1204 12:01:36.588686    2617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 12:01:36.592209    2617 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1204 12:01:36.597925    2617 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 12:01:36.710201    2617 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 12:01:36.821190    2617 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 12:01:36.821265    2617 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 12:01:36.828429    2617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:01:36.922903    2617 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 12:01:49.247856    2617 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.325157417s)
	I1204 12:01:49.247938    2617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 12:01:49.253925    2617 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 12:01:49.261959    2617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:01:49.267818    2617 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 12:01:49.359011    2617 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 12:01:49.452161    2617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:01:49.532986    2617 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 12:01:49.539689    2617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:01:49.545788    2617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:01:49.627906    2617 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 12:01:49.659028    2617 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 12:01:49.659106    2617 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 12:01:49.661501    2617 start.go:563] Will wait 60s for crictl version
	I1204 12:01:49.661568    2617 ssh_runner.go:195] Run: which crictl
	I1204 12:01:49.663097    2617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 12:01:49.677166    2617 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1204 12:01:49.677257    2617 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 12:01:49.684848    2617 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 12:01:49.697190    2617 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1204 12:01:49.697278    2617 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1204 12:01:49.705091    2617 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1204 12:01:49.710210    2617 kubeadm.go:883] updating cluster {Name:functional-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 12:01:49.710273    2617 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:01:49.710342    2617 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:01:49.716502    2617 docker.go:689] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-306000
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1204 12:01:49.716509    2617 docker.go:619] Images already preloaded, skipping extraction
	I1204 12:01:49.716567    2617 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:01:49.722104    2617 docker.go:689] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-306000
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1204 12:01:49.722111    2617 cache_images.go:84] Images are preloaded, skipping loading
	I1204 12:01:49.722115    2617 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.2 docker true true} ...
	I1204 12:01:49.722176    2617 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-306000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:functional-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 12:01:49.722244    2617 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 12:01:49.737274    2617 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
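	Note: a quick way to verify that the enable-admission-plugins override actually reached the running apiserver is to read the flag back from the kubeadm-generated static pod (a sketch, using the same kubectl context the test uses):
	
	    kubectl --context functional-306000 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins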
	I1204 12:01:49.737284    2617 cni.go:84] Creating CNI manager for ""
	I1204 12:01:49.737292    2617 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:01:49.737297    2617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 12:01:49.737306    2617 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-306000 NodeName:functional-306000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 12:01:49.737369    2617 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-306000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.105.4"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
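	Note: the generated file above chains four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". Recent kubeadm releases (v1.26 and later) can sanity-check such a file before it is applied; a sketch against the path the log copies it to a few lines below:
	
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new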
	I1204 12:01:49.737434    2617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 12:01:49.740934    2617 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 12:01:49.740972    2617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 12:01:49.744242    2617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1204 12:01:49.750229    2617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 12:01:49.756087    2617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I1204 12:01:49.761923    2617 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1204 12:01:49.763480    2617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:01:49.841777    2617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 12:01:49.847605    2617 certs.go:68] Setting up /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000 for IP: 192.168.105.4
	I1204 12:01:49.847609    2617 certs.go:194] generating shared ca certs ...
	I1204 12:01:49.847616    2617 certs.go:226] acquiring lock for ca certs: {Name:mk686f72a960a82dacaf4c130e092ac78361d077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:01:49.847779    2617 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key
	I1204 12:01:49.847843    2617 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key
	I1204 12:01:49.847847    2617 certs.go:256] generating profile certs ...
	I1204 12:01:49.847920    2617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.key
	I1204 12:01:49.847984    2617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/apiserver.key.f4fdcd4d
	I1204 12:01:49.848042    2617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/proxy-client.key
	I1204 12:01:49.848213    2617 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem (1338 bytes)
	W1204 12:01:49.848248    2617 certs.go:480] ignoring /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856_empty.pem, impossibly tiny 0 bytes
	I1204 12:01:49.848252    2617 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 12:01:49.848270    2617 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem (1082 bytes)
	I1204 12:01:49.848287    2617 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem (1123 bytes)
	I1204 12:01:49.848305    2617 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem (1679 bytes)
	I1204 12:01:49.848341    2617 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:01:49.848675    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 12:01:49.857431    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 12:01:49.866121    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 12:01:49.874710    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 12:01:49.883469    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1204 12:01:49.891971    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 12:01:49.900382    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 12:01:49.908434    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 12:01:49.916932    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem --> /usr/share/ca-certificates/1856.pem (1338 bytes)
	I1204 12:01:49.925410    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /usr/share/ca-certificates/18562.pem (1708 bytes)
	I1204 12:01:49.933614    2617 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 12:01:49.941857    2617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 12:01:49.948930    2617 ssh_runner.go:195] Run: openssl version
	I1204 12:01:49.951031    2617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1856.pem && ln -fs /usr/share/ca-certificates/1856.pem /etc/ssl/certs/1856.pem"
	I1204 12:01:49.955244    2617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1856.pem
	I1204 12:01:49.957288    2617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:00 /usr/share/ca-certificates/1856.pem
	I1204 12:01:49.957335    2617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1856.pem
	I1204 12:01:49.959586    2617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1856.pem /etc/ssl/certs/51391683.0"
	I1204 12:01:49.963547    2617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18562.pem && ln -fs /usr/share/ca-certificates/18562.pem /etc/ssl/certs/18562.pem"
	I1204 12:01:49.967740    2617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18562.pem
	I1204 12:01:49.969464    2617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:00 /usr/share/ca-certificates/18562.pem
	I1204 12:01:49.969491    2617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18562.pem
	I1204 12:01:49.971606    2617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18562.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 12:01:49.975359    2617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 12:01:49.979561    2617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:01:49.981518    2617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:01:49.981548    2617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:01:49.983680    2617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
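Each `test -s … && ln -fs` / `test -L … || ln -fs` pair above installs a CA under /etc/ssl/certs twice: once by name and once under its OpenSSL subject-hash filename (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL looks up trust anchors in a hashed certificate directory. A sketch of the same dance that shells out to `openssl x509 -hash` exactly as the log does (paths and the helper name are illustrative):

    package trust

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks certPath into dir under the "<hash>.0"
    // name that OpenSSL's hashed cert directories expect.
    func linkBySubjectHash(dir, certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(dir, hash+".0")
    	// Replace any stale link, mirroring "ln -fs" in the log.
    	os.Remove(link)
    	return os.Symlink(certPath, link)
    }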
	I1204 12:01:49.987410    2617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 12:01:49.989125    2617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 12:01:49.991168    2617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 12:01:49.993514    2617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 12:01:49.995575    2617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 12:01:49.997671    2617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 12:01:49.999702    2617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
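Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a non-zero exit would force regeneration before the control plane restarts. The equivalent check is straightforward with Go's standard library (a sketch; minikube's own logic lives in certs.go):

    package trust

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d; "openssl x509 -checkend" treats that as failure.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found in " + path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }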
	I1204 12:01:50.001799    2617 kubeadm.go:392] StartCluster: {Name:functional-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:01:50.001879    2617 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:01:50.007561    2617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 12:01:50.011652    2617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 12:01:50.011659    2617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 12:01:50.011691    2617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 12:01:50.015389    2617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:01:50.015711    2617 kubeconfig.go:125] found "functional-306000" server: "https://192.168.105.4:8441"
	I1204 12:01:50.016368    2617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 12:01:50.019867    2617 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
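The drift check above is just a `diff -u` between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new; any hunk (here, the changed enable-admission-plugins value) triggers a full reconfigure instead of a plain restart. A sketch of that comparison under the same assumption that byte equality is the criterion (`sameConfig` is a hypothetical helper):

    package provision

    import (
    	"bytes"
    	"os"
    )

    // sameConfig reports whether the deployed kubeadm config matches the
    // newly generated one; a mismatch means the control plane must be
    // reconfigured from the new file, as in the log above.
    func sameConfig(current, proposed string) (bool, error) {
    	a, err := os.ReadFile(current)
    	if err != nil {
    		return false, err
    	}
    	b, err := os.ReadFile(proposed)
    	if err != nil {
    		return false, err
    	}
    	return bytes.Equal(a, b), nil
    }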
	I1204 12:01:50.019881    2617 kubeadm.go:1160] stopping kube-system containers ...
	I1204 12:01:50.019932    2617 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:01:50.026713    2617 docker.go:483] Stopping containers: [ec73b612ae0a 1c9e7a4a52a2 19059148b0cc 77c98df0d30b 763e95a0f04e 5e490a066235 7ea2ddc1f6e4 794861fd52de eca2b92c3d51 b95d53fd8e10 df019aaed0d9 e3771e71b702 a2bc8aaf3bd9 595ad033b117 44911dac7556 1ded3460b8db 9e515e1e582a c4c01501f952 dbcb4983f011 1832fc7861a6 f5758f576ee9 0a74494e265c 9daec643174f c184a65103c3 8034d0844733 59c5becc56a9 8179d42de5de 8dfb90c39ed6 61ec3757f678 f702120538b4]
	I1204 12:01:50.026788    2617 ssh_runner.go:195] Run: docker stop ec73b612ae0a 1c9e7a4a52a2 19059148b0cc 77c98df0d30b 763e95a0f04e 5e490a066235 7ea2ddc1f6e4 794861fd52de eca2b92c3d51 b95d53fd8e10 df019aaed0d9 e3771e71b702 a2bc8aaf3bd9 595ad033b117 44911dac7556 1ded3460b8db 9e515e1e582a c4c01501f952 dbcb4983f011 1832fc7861a6 f5758f576ee9 0a74494e265c 9daec643174f c184a65103c3 8034d0844733 59c5becc56a9 8179d42de5de 8dfb90c39ed6 61ec3757f678 f702120538b4
	I1204 12:01:50.034035    2617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 12:01:50.145817    2617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 12:01:50.152196    2617 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Dec  4 20:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Dec  4 20:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Dec  4 20:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Dec  4 20:01 /etc/kubernetes/scheduler.conf
	
	I1204 12:01:50.152253    2617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1204 12:01:50.157343    2617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1204 12:01:50.161714    2617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1204 12:01:50.166027    2617 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:01:50.166071    2617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 12:01:50.170359    2617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1204 12:01:50.174393    2617 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:01:50.174428    2617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 12:01:50.178152    2617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 12:01:50.181724    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:01:50.199923    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:01:50.670804    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:01:50.799918    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:01:50.825478    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:01:50.853631    2617 api_server.go:52] waiting for apiserver process to appear ...
	I1204 12:01:50.853711    2617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:01:51.356150    2617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:01:51.855798    2617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:01:51.861297    2617 api_server.go:72] duration metric: took 1.007684666s to wait for apiserver process to appear ...
	I1204 12:01:51.861305    2617 api_server.go:88] waiting for apiserver healthz status ...
	I1204 12:01:51.861321    2617 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1204 12:01:53.581292    2617 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 12:01:53.581300    2617 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 12:01:53.581306    2617 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1204 12:01:53.590713    2617 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1204 12:01:53.590720    2617 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1204 12:01:53.863331    2617 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1204 12:01:53.868410    2617 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 12:01:53.868420    2617 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 12:01:54.363368    2617 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1204 12:01:54.367572    2617 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1204 12:01:54.367582    2617 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1204 12:01:54.863349    2617 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1204 12:01:54.868066    2617 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1204 12:01:54.873550    2617 api_server.go:141] control plane version: v1.31.2
	I1204 12:01:54.873556    2617 api_server.go:131] duration metric: took 3.012302833s to wait for apiserver health ...
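The 403 and 500 responses above are normal during a restart: anonymous access to /healthz is forbidden until the RBAC bootstrap roles land, and the 500 body enumerates exactly which post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending. The wait loop amounts to polling /healthz until it returns 200. A sketch, skipping certificate verification only because this targets a throwaway local endpoint:

    package provision

    import (
    	"crypto/tls"
    	"errors"
    	"net/http"
    	"time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it returns
    // 200 or the deadline passes, tolerating the interim 403/500s.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    		Timeout: 5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("apiserver never became healthy: " + url)
    }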
	I1204 12:01:54.873561    2617 cni.go:84] Creating CNI manager for ""
	I1204 12:01:54.873569    2617 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:01:54.878847    2617 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 12:01:54.881789    2617 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 12:01:54.887540    2617 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 12:01:54.895722    2617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 12:01:54.901297    2617 system_pods.go:59] 7 kube-system pods found
	I1204 12:01:54.901305    2617 system_pods.go:61] "coredns-7c65d6cfc9-5md4n" [d3088b68-7f6a-44fe-a326-0332d0c3a63e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1204 12:01:54.901308    2617 system_pods.go:61] "etcd-functional-306000" [d6451475-bdc1-4355-84ef-d8700450a4a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1204 12:01:54.901312    2617 system_pods.go:61] "kube-apiserver-functional-306000" [e1576dc0-bf1e-4d4f-b09b-5f20c79eea9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1204 12:01:54.901315    2617 system_pods.go:61] "kube-controller-manager-functional-306000" [7f4558c5-bf00-40f8-9d38-0a8fd5772814] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1204 12:01:54.901318    2617 system_pods.go:61] "kube-proxy-9lcnf" [1c8cfba3-4ec2-4c25-8703-e925be25d558] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1204 12:01:54.901320    2617 system_pods.go:61] "kube-scheduler-functional-306000" [8b772771-926e-4768-8122-791c67b3ca5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1204 12:01:54.901323    2617 system_pods.go:61] "storage-provisioner" [a7d371d3-e7b9-41ee-889a-547c288b743d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1204 12:01:54.901325    2617 system_pods.go:74] duration metric: took 5.598916ms to wait for pod list to return data ...
	I1204 12:01:54.901328    2617 node_conditions.go:102] verifying NodePressure condition ...
	I1204 12:01:54.902930    2617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 12:01:54.902935    2617 node_conditions.go:123] node cpu capacity is 2
	I1204 12:01:54.902940    2617 node_conditions.go:105] duration metric: took 1.610167ms to run NodePressure ...
	I1204 12:01:54.902947    2617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:01:55.125319    2617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1204 12:01:55.128594    2617 kubeadm.go:739] kubelet initialised
	I1204 12:01:55.128601    2617 kubeadm.go:740] duration metric: took 3.269584ms waiting for restarted kubelet to initialise ...
	I1204 12:01:55.128607    2617 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 12:01:55.132358    2617 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5md4n" in "kube-system" namespace to be "Ready" ...
	I1204 12:01:55.136278    2617 pod_ready.go:93] pod "coredns-7c65d6cfc9-5md4n" in "kube-system" namespace has status "Ready":"True"
	I1204 12:01:55.136286    2617 pod_ready.go:82] duration metric: took 3.92225ms for pod "coredns-7c65d6cfc9-5md4n" in "kube-system" namespace to be "Ready" ...
	I1204 12:01:55.136290    2617 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:01:57.141055    2617 pod_ready.go:103] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"False"
	I1204 12:01:59.149098    2617 pod_ready.go:103] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"False"
	I1204 12:02:01.149468    2617 pod_ready.go:103] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"False"
	I1204 12:02:03.149964    2617 pod_ready.go:103] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"False"
	I1204 12:02:05.645026    2617 pod_ready.go:103] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"False"
	I1204 12:02:07.648043    2617 pod_ready.go:103] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"False"
	I1204 12:02:08.142429    2617 pod_ready.go:93] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:08.142442    2617 pod_ready.go:82] duration metric: took 13.006377583s for pod "etcd-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.142451    2617 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.147033    2617 pod_ready.go:93] pod "kube-apiserver-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:08.147039    2617 pod_ready.go:82] duration metric: took 4.582833ms for pod "kube-apiserver-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.147045    2617 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.150739    2617 pod_ready.go:93] pod "kube-controller-manager-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:08.150745    2617 pod_ready.go:82] duration metric: took 3.695458ms for pod "kube-controller-manager-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.150751    2617 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9lcnf" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.154545    2617 pod_ready.go:93] pod "kube-proxy-9lcnf" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:08.154549    2617 pod_ready.go:82] duration metric: took 3.794916ms for pod "kube-proxy-9lcnf" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.154555    2617 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.157435    2617 pod_ready.go:93] pod "kube-scheduler-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:08.157439    2617 pod_ready.go:82] duration metric: took 2.880875ms for pod "kube-scheduler-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.157445    2617 pod_ready.go:39] duration metric: took 13.029063458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
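The pod_ready.go wait above boils down to re-fetching each system pod and checking its Ready condition until it flips to True (etcd took about 13s after the restart). A sketch of one such probe with client-go, assuming an already configured *kubernetes.Clientset:

    package provision

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named kube-system pod currently has
    // status "Ready":"True", as printed in the log lines above.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }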
	I1204 12:02:08.157462    2617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 12:02:08.164654    2617 ops.go:34] apiserver oom_adj: -16
	I1204 12:02:08.164660    2617 kubeadm.go:597] duration metric: took 18.153320042s to restartPrimaryControlPlane
	I1204 12:02:08.164664    2617 kubeadm.go:394] duration metric: took 18.163189875s to StartCluster
	I1204 12:02:08.164676    2617 settings.go:142] acquiring lock: {Name:mkc9bc1437987e3de306bb25e3c2f4effe0b8b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:02:08.164855    2617 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:02:08.165366    2617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:02:08.165698    2617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:02:08.165712    2617 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 12:02:08.165760    2617 addons.go:69] Setting storage-provisioner=true in profile "functional-306000"
	I1204 12:02:08.165769    2617 addons.go:234] Setting addon storage-provisioner=true in "functional-306000"
	W1204 12:02:08.165773    2617 addons.go:243] addon storage-provisioner should already be in state true
	I1204 12:02:08.165789    2617 host.go:66] Checking if "functional-306000" exists ...
	I1204 12:02:08.165822    2617 addons.go:69] Setting default-storageclass=true in profile "functional-306000"
	I1204 12:02:08.165833    2617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-306000"
	I1204 12:02:08.165832    2617 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:02:08.167095    2617 addons.go:234] Setting addon default-storageclass=true in "functional-306000"
	W1204 12:02:08.167099    2617 addons.go:243] addon default-storageclass should already be in state true
	I1204 12:02:08.167107    2617 host.go:66] Checking if "functional-306000" exists ...
	I1204 12:02:08.170752    2617 out.go:177] * Verifying Kubernetes components...
	I1204 12:02:08.171186    2617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 12:02:08.175268    2617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 12:02:08.175276    2617 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
	I1204 12:02:08.178752    2617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:02:08.183762    2617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:02:08.187590    2617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 12:02:08.187594    2617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 12:02:08.187606    2617 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
	I1204 12:02:08.288824    2617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 12:02:08.295626    2617 node_ready.go:35] waiting up to 6m0s for node "functional-306000" to be "Ready" ...
	I1204 12:02:08.297856    2617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 12:02:08.335829    2617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 12:02:08.337905    2617 node_ready.go:49] node "functional-306000" has status "Ready":"True"
	I1204 12:02:08.337910    2617 node_ready.go:38] duration metric: took 42.277208ms for node "functional-306000" to be "Ready" ...
	I1204 12:02:08.337914    2617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 12:02:08.541429    2617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5md4n" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.613454    2617 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1204 12:02:08.620413    2617 addons.go:510] duration metric: took 454.720166ms for enable addons: enabled=[default-storageclass storage-provisioner]
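Addons are enabled by shipping their manifests to /etc/kubernetes/addons/ and applying them with the node's pinned kubectl binary under the cluster-local kubeconfig, exactly as the two `kubectl apply` Run lines above show. A sketch of that invocation (the real runner executes it over SSH on the node; this local sketch ignores that transport, and the paths are taken from the log):

    package provision

    import "os/exec"

    // applyAddon applies a manifest with the node's own kubectl binary,
    // passing KUBECONFIG through sudo as in the log above.
    func applyAddon(manifest string) ([]byte, error) {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.31.2/kubectl",
    		"apply", "-f", manifest)
    	return cmd.CombinedOutput()
    }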
	I1204 12:02:08.940518    2617 pod_ready.go:93] pod "coredns-7c65d6cfc9-5md4n" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:08.940527    2617 pod_ready.go:82] duration metric: took 399.096125ms for pod "coredns-7c65d6cfc9-5md4n" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:08.940533    2617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:09.344179    2617 pod_ready.go:93] pod "etcd-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:09.344209    2617 pod_ready.go:82] duration metric: took 403.672042ms for pod "etcd-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:09.344227    2617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:09.745459    2617 pod_ready.go:93] pod "kube-apiserver-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:09.745492    2617 pod_ready.go:82] duration metric: took 401.259333ms for pod "kube-apiserver-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:09.745516    2617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:10.145356    2617 pod_ready.go:93] pod "kube-controller-manager-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:10.145384    2617 pod_ready.go:82] duration metric: took 399.859417ms for pod "kube-controller-manager-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:10.145404    2617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9lcnf" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:10.545892    2617 pod_ready.go:93] pod "kube-proxy-9lcnf" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:10.545927    2617 pod_ready.go:82] duration metric: took 400.514125ms for pod "kube-proxy-9lcnf" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:10.545947    2617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:10.940567    2617 pod_ready.go:93] pod "kube-scheduler-functional-306000" in "kube-system" namespace has status "Ready":"True"
	I1204 12:02:10.940577    2617 pod_ready.go:82] duration metric: took 394.629625ms for pod "kube-scheduler-functional-306000" in "kube-system" namespace to be "Ready" ...
	I1204 12:02:10.940585    2617 pod_ready.go:39] duration metric: took 2.602712666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 12:02:10.940599    2617 api_server.go:52] waiting for apiserver process to appear ...
	I1204 12:02:10.940747    2617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:02:10.950075    2617 api_server.go:72] duration metric: took 2.784412916s to wait for apiserver process to appear ...
	I1204 12:02:10.950082    2617 api_server.go:88] waiting for apiserver healthz status ...
	I1204 12:02:10.950094    2617 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1204 12:02:10.954540    2617 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1204 12:02:10.955241    2617 api_server.go:141] control plane version: v1.31.2
	I1204 12:02:10.955253    2617 api_server.go:131] duration metric: took 5.162208ms to wait for apiserver health ...
	I1204 12:02:10.955257    2617 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 12:02:11.148426    2617 system_pods.go:59] 7 kube-system pods found
	I1204 12:02:11.148439    2617 system_pods.go:61] "coredns-7c65d6cfc9-5md4n" [d3088b68-7f6a-44fe-a326-0332d0c3a63e] Running
	I1204 12:02:11.148444    2617 system_pods.go:61] "etcd-functional-306000" [d6451475-bdc1-4355-84ef-d8700450a4a6] Running
	I1204 12:02:11.148454    2617 system_pods.go:61] "kube-apiserver-functional-306000" [08beabc7-6dcb-4e33-8249-42ca44212a36] Running
	I1204 12:02:11.148458    2617 system_pods.go:61] "kube-controller-manager-functional-306000" [7f4558c5-bf00-40f8-9d38-0a8fd5772814] Running
	I1204 12:02:11.148461    2617 system_pods.go:61] "kube-proxy-9lcnf" [1c8cfba3-4ec2-4c25-8703-e925be25d558] Running
	I1204 12:02:11.148464    2617 system_pods.go:61] "kube-scheduler-functional-306000" [8b772771-926e-4768-8122-791c67b3ca5d] Running
	I1204 12:02:11.148467    2617 system_pods.go:61] "storage-provisioner" [a7d371d3-e7b9-41ee-889a-547c288b743d] Running
	I1204 12:02:11.148472    2617 system_pods.go:74] duration metric: took 193.21425ms to wait for pod list to return data ...
	I1204 12:02:11.148478    2617 default_sa.go:34] waiting for default service account to be created ...
	I1204 12:02:11.345756    2617 default_sa.go:45] found service account: "default"
	I1204 12:02:11.345789    2617 default_sa.go:55] duration metric: took 197.306667ms for default service account to be created ...
	I1204 12:02:11.345806    2617 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 12:02:11.551896    2617 system_pods.go:86] 7 kube-system pods found
	I1204 12:02:11.551928    2617 system_pods.go:89] "coredns-7c65d6cfc9-5md4n" [d3088b68-7f6a-44fe-a326-0332d0c3a63e] Running
	I1204 12:02:11.551937    2617 system_pods.go:89] "etcd-functional-306000" [d6451475-bdc1-4355-84ef-d8700450a4a6] Running
	I1204 12:02:11.551943    2617 system_pods.go:89] "kube-apiserver-functional-306000" [08beabc7-6dcb-4e33-8249-42ca44212a36] Running
	I1204 12:02:11.551949    2617 system_pods.go:89] "kube-controller-manager-functional-306000" [7f4558c5-bf00-40f8-9d38-0a8fd5772814] Running
	I1204 12:02:11.551953    2617 system_pods.go:89] "kube-proxy-9lcnf" [1c8cfba3-4ec2-4c25-8703-e925be25d558] Running
	I1204 12:02:11.551958    2617 system_pods.go:89] "kube-scheduler-functional-306000" [8b772771-926e-4768-8122-791c67b3ca5d] Running
	I1204 12:02:11.551964    2617 system_pods.go:89] "storage-provisioner" [a7d371d3-e7b9-41ee-889a-547c288b743d] Running
	I1204 12:02:11.551977    2617 system_pods.go:126] duration metric: took 206.16675ms to wait for k8s-apps to be running ...
	I1204 12:02:11.551994    2617 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 12:02:11.552269    2617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 12:02:11.572161    2617 system_svc.go:56] duration metric: took 20.161792ms WaitForService to wait for kubelet
	I1204 12:02:11.572177    2617 kubeadm.go:582] duration metric: took 3.40652425s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:02:11.572203    2617 node_conditions.go:102] verifying NodePressure condition ...
	I1204 12:02:11.745655    2617 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1204 12:02:11.745675    2617 node_conditions.go:123] node cpu capacity is 2
	I1204 12:02:11.745699    2617 node_conditions.go:105] duration metric: took 173.491625ms to run NodePressure ...
	I1204 12:02:11.745731    2617 start.go:241] waiting for startup goroutines ...
	I1204 12:02:11.745748    2617 start.go:246] waiting for cluster config update ...
	I1204 12:02:11.745773    2617 start.go:255] writing updated cluster config ...
	I1204 12:02:11.747372    2617 ssh_runner.go:195] Run: rm -f paused
	I1204 12:02:11.815148    2617 start.go:600] kubectl: 1.30.2, cluster: 1.31.2 (minor skew: 1)
	I1204 12:02:11.820259    2617 out.go:177] * Done! kubectl is now configured to use "functional-306000" cluster and "default" namespace by default
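The closing skew note compares the client kubectl minor version against the cluster's; kubectl officially supports one minor version of skew in either direction, so 1.30 against 1.31 only rates an informational line rather than a warning. A sketch of that computation (`minorSkew` and `minor` are hypothetical helpers):

    package provision

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns |minor(a) - minor(b)| for versions like "1.30.2".
    func minorSkew(a, b string) (int, error) {
    	ma, err := minor(a)
    	if err != nil {
    		return 0, err
    	}
    	mb, err := minor(b)
    	if err != nil {
    		return 0, err
    	}
    	if ma > mb {
    		return ma - mb, nil
    	}
    	return mb - ma, nil
    }

    func minor(v string) (int, error) {
    	parts := strings.Split(v, ".")
    	if len(parts) < 2 {
    		return 0, fmt.Errorf("malformed version %q", v)
    	}
    	return strconv.Atoi(parts[1])
    }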
	
	
	==> Docker <==
	Dec 04 20:02:56 functional-306000 dockerd[5668]: time="2024-12-04T20:02:56.527339136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 20:02:56 functional-306000 dockerd[5668]: time="2024-12-04T20:02:56.527349470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 20:02:56 functional-306000 dockerd[5668]: time="2024-12-04T20:02:56.527391221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 20:02:56 functional-306000 cri-dockerd[5939]: time="2024-12-04T20:02:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f56cbb7f010bee01f5284445195d5d4c9ed94d41ba6751f21b3358e971076c2e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 04 20:02:58 functional-306000 cri-dockerd[5939]: time="2024-12-04T20:02:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Dec 04 20:02:58 functional-306000 dockerd[5668]: time="2024-12-04T20:02:58.300980658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 20:02:58 functional-306000 dockerd[5668]: time="2024-12-04T20:02:58.301176249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 20:02:58 functional-306000 dockerd[5668]: time="2024-12-04T20:02:58.301186958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 20:02:58 functional-306000 dockerd[5668]: time="2024-12-04T20:02:58.301261128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 20:02:58 functional-306000 dockerd[5662]: time="2024-12-04T20:02:58.341950681Z" level=info msg="ignoring event" container=b39a13ea3ae6c640e12891ac8e2c645080c6587141422868b6ec1e843af6d4fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 04 20:02:58 functional-306000 dockerd[5668]: time="2024-12-04T20:02:58.342188691Z" level=info msg="shim disconnected" id=b39a13ea3ae6c640e12891ac8e2c645080c6587141422868b6ec1e843af6d4fe namespace=moby
	Dec 04 20:02:58 functional-306000 dockerd[5668]: time="2024-12-04T20:02:58.342221026Z" level=warning msg="cleaning up after shim disconnected" id=b39a13ea3ae6c640e12891ac8e2c645080c6587141422868b6ec1e843af6d4fe namespace=moby
	Dec 04 20:02:58 functional-306000 dockerd[5668]: time="2024-12-04T20:02:58.342225151Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 04 20:03:00 functional-306000 dockerd[5662]: time="2024-12-04T20:03:00.137942083Z" level=info msg="ignoring event" container=f56cbb7f010bee01f5284445195d5d4c9ed94d41ba6751f21b3358e971076c2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 04 20:03:00 functional-306000 dockerd[5668]: time="2024-12-04T20:03:00.138150466Z" level=info msg="shim disconnected" id=f56cbb7f010bee01f5284445195d5d4c9ed94d41ba6751f21b3358e971076c2e namespace=moby
	Dec 04 20:03:00 functional-306000 dockerd[5668]: time="2024-12-04T20:03:00.138181384Z" level=warning msg="cleaning up after shim disconnected" id=f56cbb7f010bee01f5284445195d5d4c9ed94d41ba6751f21b3358e971076c2e namespace=moby
	Dec 04 20:03:00 functional-306000 dockerd[5668]: time="2024-12-04T20:03:00.138185676Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 04 20:03:02 functional-306000 dockerd[5668]: time="2024-12-04T20:03:02.041241815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 20:03:02 functional-306000 dockerd[5668]: time="2024-12-04T20:03:02.041332819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 20:03:02 functional-306000 dockerd[5668]: time="2024-12-04T20:03:02.041357154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 20:03:02 functional-306000 dockerd[5668]: time="2024-12-04T20:03:02.041435615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 20:03:02 functional-306000 dockerd[5662]: time="2024-12-04T20:03:02.075247754Z" level=info msg="ignoring event" container=09d9f632a8e2f920f71a504a17519433222874954724d001fa881482123db56f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 04 20:03:02 functional-306000 dockerd[5668]: time="2024-12-04T20:03:02.075397886Z" level=info msg="shim disconnected" id=09d9f632a8e2f920f71a504a17519433222874954724d001fa881482123db56f namespace=moby
	Dec 04 20:03:02 functional-306000 dockerd[5668]: time="2024-12-04T20:03:02.075428304Z" level=warning msg="cleaning up after shim disconnected" id=09d9f632a8e2f920f71a504a17519433222874954724d001fa881482123db56f namespace=moby
	Dec 04 20:03:02 functional-306000 dockerd[5668]: time="2024-12-04T20:03:02.075432554Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	09d9f632a8e2f       72565bf5bbedf                                                                                         1 second ago         Exited              echoserver-arm            2                   575fdab1189a1       hello-node-64b4f8f9ff-vz2rs
	b39a13ea3ae6c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 seconds ago        Exited              mount-munger              0                   f56cbb7f010be       busybox-mount
	f841eafd75832       72565bf5bbedf                                                                                         9 seconds ago        Exited              echoserver-arm            2                   cd913096b66e1       hello-node-connect-65d86f57f4-f2pbl
	1859484fe81ee       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            1                   575fdab1189a1       hello-node-64b4f8f9ff-vz2rs
	b26e7d20716a5       nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be                         21 seconds ago       Running             myfrontend                0                   3b31a89b5454e       sp-pod
	fbd4bcef8d62b       nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         38 seconds ago       Running             nginx                     0                   14d739a6532f9       nginx-svc
	cfc440b7b5d7c       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   b3c0be8a9f871       coredns-7c65d6cfc9-5md4n
	9310b8dc4d955       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   e1f9199993520       storage-provisioner
	11f2e07dec5fc       021d242013305                                                                                         About a minute ago   Running             kube-proxy                2                   1039914d11f1b       kube-proxy-9lcnf
	f34347afdccbe       9404aea098d9e                                                                                         About a minute ago   Running             kube-controller-manager   2                   dd063abd8c59f       kube-controller-manager-functional-306000
	402b976c7bbe2       d6b061e73ae45                                                                                         About a minute ago   Running             kube-scheduler            2                   31ecade83a8db       kube-scheduler-functional-306000
	fa03ec137dbaa       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   fc27d3e9826c6       etcd-functional-306000
	bbd847ec66454       f9c26480f1e72                                                                                         About a minute ago   Running             kube-apiserver            0                   5f8d4c1f809a6       kube-apiserver-functional-306000
	ec73b612ae0a1       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   77c98df0d30bb       coredns-7c65d6cfc9-5md4n
	1c9e7a4a52a21       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   5e490a066235e       storage-provisioner
	19059148b0cc5       021d242013305                                                                                         About a minute ago   Exited              kube-proxy                1                   763e95a0f04ed       kube-proxy-9lcnf
	7ea2ddc1f6e4f       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   a2bc8aaf3bd99       etcd-functional-306000
	794861fd52de9       d6b061e73ae45                                                                                         About a minute ago   Exited              kube-scheduler            1                   595ad033b117d       kube-scheduler-functional-306000
	b95d53fd8e109       9404aea098d9e                                                                                         About a minute ago   Exited              kube-controller-manager   1                   df019aaed0d98       kube-controller-manager-functional-306000
	
	
	==> coredns [cfc440b7b5d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36325 - 65158 "HINFO IN 3088599290541658946.4577580000351982978. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011043965s
	[INFO] 10.244.0.1:8275 - 4776 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000088004s
	[INFO] 10.244.0.1:53616 - 3095 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.00011638s
	[INFO] 10.244.0.1:16066 - 7528 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001042629s
	[INFO] 10.244.0.1:44257 - 62659 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000021459s
	[INFO] 10.244.0.1:24636 - 57266 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000062836s
	[INFO] 10.244.0.1:25625 - 26406 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000094004s
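
The queries above show cluster DNS resolving nginx-svc (A and AAAA) from the pod network with NOERROR. To reproduce such a lookup by hand, one option is a throwaway busybox pod; a minimal sketch, where the probe pod name and image tag are illustrative rather than part of this run:

	# query the same service name the test exercises
	kubectl --context functional-306000 run dns-probe --rm -i --restart=Never \
	  --image=busybox:1.28 -- nslookup nginx-svc.default.svc.cluster.local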
	
	
	==> coredns [ec73b612ae0a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37391 - 45166 "HINFO IN 4390293926329659247.4006578515912338001. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011162559s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-306000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-306000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=functional-306000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T12_00_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:00:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-306000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 20:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:02:54 +0000   Wed, 04 Dec 2024 20:00:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:02:54 +0000   Wed, 04 Dec 2024 20:00:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:02:54 +0000   Wed, 04 Dec 2024 20:00:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:02:54 +0000   Wed, 04 Dec 2024 20:00:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-306000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 2646e394e9674c899171a82ee9ae2db3
	  System UUID:                2646e394e9674c899171a82ee9ae2db3
	  Boot ID:                    931b258e-7e80-483c-a850-d5fc8daee54d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-vz2rs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     hello-node-connect-65d86f57f4-f2pbl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 coredns-7c65d6cfc9-5md4n                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m20s
	  kube-system                 etcd-functional-306000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m26s
	  kube-system                 kube-apiserver-functional-306000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-functional-306000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-9lcnf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-functional-306000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  Starting                 68s                    kube-proxy       
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m26s (x2 over 2m26s)  kubelet          Node functional-306000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m26s (x2 over 2m26s)  kubelet          Node functional-306000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m26s (x2 over 2m26s)  kubelet          Node functional-306000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m22s                  node-controller  Node functional-306000 event: Registered Node functional-306000 in Controller
	  Normal  NodeReady                2m22s                  kubelet          Node functional-306000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node functional-306000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node functional-306000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 117s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node functional-306000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                   node-controller  Node functional-306000 event: Registered Node functional-306000 in Controller
	  Normal  Starting                 72s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 72s)      kubelet          Node functional-306000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 72s)      kubelet          Node functional-306000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 72s)      kubelet          Node functional-306000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                    node-controller  Node functional-306000 event: Registered Node functional-306000 in Controller
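
For reference, the percentages in the Allocated resources table above are integer-truncated ratios of summed pod requests to allocatable capacity: 100m + 100m + 250m + 200m + 100m = 750m CPU against 2000m allocatable is 37.5%, reported as 37%; 170Mi of 3904740Ki memory is about 4.5%, reported as 4%. A one-line sanity check of the truncation:

	# 750m of 2000m allocatable CPU; integer division matches describe's 37%
	echo $(( 750 * 100 / 2000 ))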
	
	
	==> dmesg <==
	[  +7.247806] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.913723] systemd-fstab-generator[4747]: Ignoring "noauto" option for root device
	[ +11.630396] systemd-fstab-generator[5175]: Ignoring "noauto" option for root device
	[  +0.056427] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.123139] systemd-fstab-generator[5210]: Ignoring "noauto" option for root device
	[  +0.114782] systemd-fstab-generator[5222]: Ignoring "noauto" option for root device
	[  +0.100938] systemd-fstab-generator[5236]: Ignoring "noauto" option for root device
	[  +5.113632] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.336694] systemd-fstab-generator[5891]: Ignoring "noauto" option for root device
	[  +0.091083] systemd-fstab-generator[5904]: Ignoring "noauto" option for root device
	[  +0.081714] systemd-fstab-generator[5916]: Ignoring "noauto" option for root device
	[  +0.096825] systemd-fstab-generator[5931]: Ignoring "noauto" option for root device
	[  +0.214551] systemd-fstab-generator[6095]: Ignoring "noauto" option for root device
	[  +0.950024] systemd-fstab-generator[6216]: Ignoring "noauto" option for root device
	[  +3.411410] kauditd_printk_skb: 199 callbacks suppressed
	[Dec 4 20:02] systemd-fstab-generator[7262]: Ignoring "noauto" option for root device
	[  +0.053455] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.637834] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.753086] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.040698] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.671428] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.117140] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.298075] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.102709] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.016112] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7ea2ddc1f6e4] <==
	{"level":"info","ts":"2024-12-04T20:01:07.459861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-04T20:01:07.459934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-12-04T20:01:07.459965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-12-04T20:01:07.459982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-04T20:01:07.460014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-12-04T20:01:07.460059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-04T20:01:07.465146Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-306000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T20:01:07.465289Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:01:07.465866Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T20:01:07.465910Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T20:01:07.465940Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:01:07.467240Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T20:01:07.467240Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T20:01:07.469247Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T20:01:07.469628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-12-04T20:01:37.100808Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-04T20:01:37.100834Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-306000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-12-04T20:01:37.100867Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T20:01:37.100913Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T20:01:37.110207Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T20:01:37.110230Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-04T20:01:37.110264Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-12-04T20:01:37.112772Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-04T20:01:37.112836Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-04T20:01:37.112841Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-306000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [fa03ec137dba] <==
	{"level":"info","ts":"2024-12-04T20:01:52.170461Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T20:01:52.171425Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T20:01:52.171535Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T20:01:52.165427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-12-04T20:01:52.174503Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-12-04T20:01:52.174556Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T20:01:52.174581Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T20:01:52.174920Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-04T20:01:52.174941Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-04T20:01:53.210666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-04T20:01:53.210814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-04T20:01:53.210873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-04T20:01:53.210910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-12-04T20:01:53.210926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-12-04T20:01:53.210952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-12-04T20:01:53.211014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-12-04T20:01:53.216013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:01:53.216709Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:01:53.216032Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-306000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T20:01:53.217254Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T20:01:53.217304Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T20:01:53.218747Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T20:01:53.218747Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T20:01:53.221689Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T20:01:53.223639Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 20:03:03 up 2 min,  0 users,  load average: 0.36, 0.35, 0.15
	Linux functional-306000 5.10.207 #1 SMP PREEMPT Wed Nov 6 19:14:02 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bbd847ec6645] <==
	I1204 20:01:53.814874       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1204 20:01:53.814926       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1204 20:01:53.814952       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1204 20:01:53.815249       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1204 20:01:53.815396       1 shared_informer.go:320] Caches are synced for configmaps
	I1204 20:01:53.815452       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1204 20:01:53.815591       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1204 20:01:53.818283       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1204 20:01:53.824341       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 20:01:54.708554       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1204 20:01:54.822093       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1204 20:01:54.822821       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 20:01:54.824349       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 20:01:55.067847       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 20:01:55.071618       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 20:01:55.082632       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 20:01:55.090286       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 20:01:55.092286       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 20:02:13.380700       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.188.234"}
	I1204 20:02:20.802597       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.107.239.212"}
	I1204 20:02:30.220925       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1204 20:02:30.269411       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.72.16"}
	E1204 20:02:39.779004       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49702: use of closed network connection
	E1204 20:02:47.848104       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49712: use of closed network connection
	I1204 20:02:47.931009       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.143.161"}
	
	
	==> kube-controller-manager [b95d53fd8e10] <==
	I1204 20:01:11.324775       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1204 20:01:11.324839       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1204 20:01:11.327985       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1204 20:01:11.328020       1 shared_informer.go:320] Caches are synced for HPA
	I1204 20:01:11.352304       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1204 20:01:11.352352       1 shared_informer.go:320] Caches are synced for expand
	I1204 20:01:11.352574       1 shared_informer.go:320] Caches are synced for deployment
	I1204 20:01:11.352624       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1204 20:01:11.354155       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1204 20:01:11.354210       1 shared_informer.go:320] Caches are synced for GC
	I1204 20:01:11.354211       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1204 20:01:11.354834       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1204 20:01:11.355049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="89.39µs"
	I1204 20:01:11.402646       1 shared_informer.go:320] Caches are synced for cronjob
	I1204 20:01:11.407758       1 shared_informer.go:320] Caches are synced for daemon sets
	I1204 20:01:11.415263       1 shared_informer.go:320] Caches are synced for stateful set
	I1204 20:01:11.522992       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1204 20:01:11.553339       1 shared_informer.go:320] Caches are synced for disruption
	I1204 20:01:11.554864       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 20:01:11.555480       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 20:01:11.963162       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 20:01:12.013475       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 20:01:12.013780       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1204 20:01:16.054708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.686366ms"
	I1204 20:01:16.054901       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.717µs"
	
	
	==> kube-controller-manager [f34347afdccb] <==
	I1204 20:01:57.096962       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1204 20:01:57.139649       1 shared_informer.go:320] Caches are synced for disruption
	I1204 20:01:57.166715       1 shared_informer.go:320] Caches are synced for attach detach
	I1204 20:01:57.282110       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 20:01:57.291724       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 20:01:57.702878       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 20:01:57.752157       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 20:01:57.752190       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1204 20:02:24.272200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-306000"
	I1204 20:02:30.245126       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="20.782461ms"
	I1204 20:02:30.250787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.633749ms"
	I1204 20:02:30.261941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="11.053572ms"
	I1204 20:02:30.261991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="31.543µs"
	I1204 20:02:38.591325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="48.461µs"
	I1204 20:02:39.632723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="40.418µs"
	I1204 20:02:40.686982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.585µs"
	I1204 20:02:47.898272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="10.631206ms"
	I1204 20:02:47.906486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.183768ms"
	I1204 20:02:47.906723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="16.001µs"
	I1204 20:02:48.846140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="40.627µs"
	I1204 20:02:49.866566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="34.377µs"
	I1204 20:02:54.857327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-306000"
	I1204 20:02:54.951543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.085µs"
	I1204 20:03:02.006233       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="48.586µs"
	I1204 20:03:03.097359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="20.917µs"
	
	
	==> kube-proxy [11f2e07dec5f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 20:01:54.536385       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 20:01:54.540491       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1204 20:01:54.540539       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 20:01:54.548155       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 20:01:54.548169       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 20:01:54.548180       1 server_linux.go:169] "Using iptables Proxier"
	I1204 20:01:54.548762       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 20:01:54.548841       1 server.go:483] "Version info" version="v1.31.2"
	I1204 20:01:54.548848       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:01:54.549246       1 config.go:199] "Starting service config controller"
	I1204 20:01:54.549259       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 20:01:54.549269       1 config.go:105] "Starting endpoint slice config controller"
	I1204 20:01:54.549271       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 20:01:54.549441       1 config.go:328] "Starting node config controller"
	I1204 20:01:54.549448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 20:01:54.649718       1 shared_informer.go:320] Caches are synced for node config
	I1204 20:01:54.649721       1 shared_informer.go:320] Caches are synced for service config
	I1204 20:01:54.649738       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [19059148b0cc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1204 20:01:09.051730       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1204 20:01:09.055402       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1204 20:01:09.055427       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 20:01:09.065690       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1204 20:01:09.065710       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1204 20:01:09.065722       1 server_linux.go:169] "Using iptables Proxier"
	I1204 20:01:09.066519       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 20:01:09.066610       1 server.go:483] "Version info" version="v1.31.2"
	I1204 20:01:09.066615       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:01:09.067048       1 config.go:199] "Starting service config controller"
	I1204 20:01:09.067056       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 20:01:09.067066       1 config.go:105] "Starting endpoint slice config controller"
	I1204 20:01:09.067068       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 20:01:09.067234       1 config.go:328] "Starting node config controller"
	I1204 20:01:09.067236       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 20:01:09.167422       1 shared_informer.go:320] Caches are synced for node config
	I1204 20:01:09.167422       1 shared_informer.go:320] Caches are synced for service config
	I1204 20:01:09.167469       1 shared_informer.go:320] Caches are synced for endpoint slice config
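
Both kube-proxy restarts log the same two startup conditions: the nftables cleanup error is non-fatal (the kernel rejects the "add table" commands with "Operation not supported", and the proxy falls back as the "Using iptables Proxier" line shows), while the nodePortAddresses message is only advisory. To inspect the setting the warning refers to, the kubeadm-managed ConfigMap can be dumped; a sketch:

	# kubeadm stores the kube-proxy configuration under the config.conf key;
	# nodePortAddresses is the field the warning above mentions
	kubectl --context functional-306000 -n kube-system get configmap kube-proxy \
	  -o jsonpath='{.data.config\.conf}'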
	
	
	==> kube-scheduler [402b976c7bbe] <==
	I1204 20:01:52.426859       1 serving.go:386] Generated self-signed cert in-memory
	W1204 20:01:53.720567       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 20:01:53.720584       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 20:01:53.720589       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 20:01:53.720592       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 20:01:53.739243       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 20:01:53.739914       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:01:53.740995       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 20:01:53.741034       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:01:53.741090       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 20:01:53.741118       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 20:01:53.843504       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [794861fd52de] <==
	I1204 20:01:06.660683       1 serving.go:386] Generated self-signed cert in-memory
	W1204 20:01:07.996279       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 20:01:07.996377       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 20:01:07.999293       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 20:01:07.999328       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 20:01:08.020114       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 20:01:08.020129       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:01:08.021102       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 20:01:08.021135       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:01:08.021223       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 20:01:08.021278       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 20:01:08.122067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 20:01:37.088757       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1204 20:01:37.088803       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1204 20:01:37.088852       1 run.go:72] "command failed" err="finished without leader elect"
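
Both scheduler instances log the same extension-apiserver-authentication warning together with its suggested remedy. Since kube-scheduler authenticates as the built-in user system:kube-scheduler rather than a service account, one hypothetical rendering of that hint uses --user instead of --serviceaccount (the rolebinding name here is made up):

	kubectl --context functional-306000 -n kube-system create rolebinding \
	  auth-reader-kube-scheduler \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler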
	
	
	==> kubelet <==
	Dec 04 20:02:49 functional-306000 kubelet[6223]: I1204 20:02:49.856681    6223 scope.go:117] "RemoveContainer" containerID="1859484fe81ee231723902263081be77129ceee9b06c271c694073b99b85681b"
	Dec 04 20:02:49 functional-306000 kubelet[6223]: E1204 20:02:49.856815    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-vz2rs_default(c84bba00-e47e-473e-b1bc-1fc337e8f3f8)\"" pod="default/hello-node-64b4f8f9ff-vz2rs" podUID="c84bba00-e47e-473e-b1bc-1fc337e8f3f8"
	Dec 04 20:02:50 functional-306000 kubelet[6223]: E1204 20:02:50.991362    6223 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 04 20:02:50 functional-306000 kubelet[6223]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 04 20:02:50 functional-306000 kubelet[6223]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 04 20:02:50 functional-306000 kubelet[6223]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 04 20:02:50 functional-306000 kubelet[6223]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 04 20:02:51 functional-306000 kubelet[6223]: I1204 20:02:51.076541    6223 scope.go:117] "RemoveContainer" containerID="eca2b92c3d5162a94422a08dba6e0da6c38a0d5f2954f1d7670c1fb9dddb4c75"
	Dec 04 20:02:53 functional-306000 kubelet[6223]: I1204 20:02:53.982531    6223 scope.go:117] "RemoveContainer" containerID="83ce21e57ab60a243683519ebcf5e57bb885af2e8c9fd709faf551139a44d311"
	Dec 04 20:02:54 functional-306000 kubelet[6223]: I1204 20:02:54.944825    6223 scope.go:117] "RemoveContainer" containerID="83ce21e57ab60a243683519ebcf5e57bb885af2e8c9fd709faf551139a44d311"
	Dec 04 20:02:54 functional-306000 kubelet[6223]: I1204 20:02:54.944977    6223 scope.go:117] "RemoveContainer" containerID="f841eafd7583292283577759a807b7186cf01a033b947505a1f032a7d35e1cbb"
	Dec 04 20:02:54 functional-306000 kubelet[6223]: E1204 20:02:54.945049    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-f2pbl_default(376abad6-fec2-4639-8572-9a5341918b6a)\"" pod="default/hello-node-connect-65d86f57f4-f2pbl" podUID="376abad6-fec2-4639-8572-9a5341918b6a"
	Dec 04 20:02:56 functional-306000 kubelet[6223]: I1204 20:02:56.350327    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/db2d25dd-2a09-4d2a-8619-380af88f773b-test-volume\") pod \"busybox-mount\" (UID: \"db2d25dd-2a09-4d2a-8619-380af88f773b\") " pod="default/busybox-mount"
	Dec 04 20:02:56 functional-306000 kubelet[6223]: I1204 20:02:56.350396    6223 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mj8\" (UniqueName: \"kubernetes.io/projected/db2d25dd-2a09-4d2a-8619-380af88f773b-kube-api-access-99mj8\") pod \"busybox-mount\" (UID: \"db2d25dd-2a09-4d2a-8619-380af88f773b\") " pod="default/busybox-mount"
	Dec 04 20:03:00 functional-306000 kubelet[6223]: I1204 20:03:00.296741    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/db2d25dd-2a09-4d2a-8619-380af88f773b-test-volume\") pod \"db2d25dd-2a09-4d2a-8619-380af88f773b\" (UID: \"db2d25dd-2a09-4d2a-8619-380af88f773b\") "
	Dec 04 20:03:00 functional-306000 kubelet[6223]: I1204 20:03:00.296774    6223 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99mj8\" (UniqueName: \"kubernetes.io/projected/db2d25dd-2a09-4d2a-8619-380af88f773b-kube-api-access-99mj8\") pod \"db2d25dd-2a09-4d2a-8619-380af88f773b\" (UID: \"db2d25dd-2a09-4d2a-8619-380af88f773b\") "
	Dec 04 20:03:00 functional-306000 kubelet[6223]: I1204 20:03:00.296892    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db2d25dd-2a09-4d2a-8619-380af88f773b-test-volume" (OuterVolumeSpecName: "test-volume") pod "db2d25dd-2a09-4d2a-8619-380af88f773b" (UID: "db2d25dd-2a09-4d2a-8619-380af88f773b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 04 20:03:00 functional-306000 kubelet[6223]: I1204 20:03:00.297503    6223 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2d25dd-2a09-4d2a-8619-380af88f773b-kube-api-access-99mj8" (OuterVolumeSpecName: "kube-api-access-99mj8") pod "db2d25dd-2a09-4d2a-8619-380af88f773b" (UID: "db2d25dd-2a09-4d2a-8619-380af88f773b"). InnerVolumeSpecName "kube-api-access-99mj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 04 20:03:00 functional-306000 kubelet[6223]: I1204 20:03:00.396922    6223 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/db2d25dd-2a09-4d2a-8619-380af88f773b-test-volume\") on node \"functional-306000\" DevicePath \"\""
	Dec 04 20:03:00 functional-306000 kubelet[6223]: I1204 20:03:00.396934    6223 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-99mj8\" (UniqueName: \"kubernetes.io/projected/db2d25dd-2a09-4d2a-8619-380af88f773b-kube-api-access-99mj8\") on node \"functional-306000\" DevicePath \"\""
	Dec 04 20:03:01 functional-306000 kubelet[6223]: I1204 20:03:01.047269    6223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f56cbb7f010bee01f5284445195d5d4c9ed94d41ba6751f21b3358e971076c2e"
	Dec 04 20:03:01 functional-306000 kubelet[6223]: I1204 20:03:01.984505    6223 scope.go:117] "RemoveContainer" containerID="1859484fe81ee231723902263081be77129ceee9b06c271c694073b99b85681b"
	Dec 04 20:03:03 functional-306000 kubelet[6223]: I1204 20:03:03.080320    6223 scope.go:117] "RemoveContainer" containerID="1859484fe81ee231723902263081be77129ceee9b06c271c694073b99b85681b"
	Dec 04 20:03:03 functional-306000 kubelet[6223]: I1204 20:03:03.080595    6223 scope.go:117] "RemoveContainer" containerID="09d9f632a8e2f920f71a504a17519433222874954724d001fa881482123db56f"
	Dec 04 20:03:03 functional-306000 kubelet[6223]: E1204 20:03:03.080667    6223 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-vz2rs_default(c84bba00-e47e-473e-b1bc-1fc337e8f3f8)\"" pod="default/hello-node-64b4f8f9ff-vz2rs" podUID="c84bba00-e47e-473e-b1bc-1fc337e8f3f8"
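
The kubelet entries show the echoserver-arm containers of both hello-node pods in CrashLoopBackOff. To see why the container keeps exiting, the log of the previous (crashed) instance can be fetched; a sketch using the pod name taken from the messages above:

	# --previous returns the log of the last terminated container instance
	kubectl --context functional-306000 logs hello-node-64b4f8f9ff-vz2rs --previous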
	
	
	==> storage-provisioner [1c9e7a4a52a2] <==
	I1204 20:01:09.018517       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 20:01:09.026000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 20:01:09.028271       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 20:01:26.447328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 20:01:26.447745       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-306000_d49b1d2b-4d7f-4c00-b7e6-5bf5539ef151!
	I1204 20:01:26.448210       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82c1eb4f-0109-4b12-85d5-c09987776840", APIVersion:"v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-306000_d49b1d2b-4d7f-4c00-b7e6-5bf5539ef151 became leader
	I1204 20:01:26.549217       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-306000_d49b1d2b-4d7f-4c00-b7e6-5bf5539ef151!
	
	
	==> storage-provisioner [9310b8dc4d95] <==
	I1204 20:01:54.493977       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 20:01:54.523703       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 20:01:54.523766       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 20:02:11.929380       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 20:02:11.929634       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-306000_220af4fe-2566-49db-bb25-1c903a38b7ee!
	I1204 20:02:11.930004       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82c1eb4f-0109-4b12-85d5-c09987776840", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-306000_220af4fe-2566-49db-bb25-1c903a38b7ee became leader
	I1204 20:02:12.029773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-306000_220af4fe-2566-49db-bb25-1c903a38b7ee!
	I1204 20:02:27.598945       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1204 20:02:27.599247       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9ff8c40e-56fe-4956-8f6b-68baec01aeec", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1204 20:02:27.598977       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    5366dd28-ddb2-41e8-b2f4-3907a040dfd4 326 0 2024-12-04 20:00:42 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-04 20:00:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9ff8c40e-56fe-4956-8f6b-68baec01aeec &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9ff8c40e-56fe-4956-8f6b-68baec01aeec 652 0 2024-12-04 20:02:27 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-04 20:02:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-04 20:02:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1204 20:02:27.600240       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9ff8c40e-56fe-4956-8f6b-68baec01aeec" provisioned
	I1204 20:02:27.600287       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1204 20:02:27.600303       1 volume_store.go:212] Trying to save persistentvolume "pvc-9ff8c40e-56fe-4956-8f6b-68baec01aeec"
	I1204 20:02:27.605444       1 volume_store.go:219] persistentvolume "pvc-9ff8c40e-56fe-4956-8f6b-68baec01aeec" saved
	I1204 20:02:27.605868       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9ff8c40e-56fe-4956-8f6b-68baec01aeec", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9ff8c40e-56fe-4956-8f6b-68baec01aeec
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-306000 -n functional-306000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-306000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-306000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-306000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-306000/192.168.105.4
	Start Time:       Wed, 04 Dec 2024 12:02:56 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://b39a13ea3ae6c640e12891ac8e2c645080c6587141422868b6ec1e843af6d4fe
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 04 Dec 2024 12:02:58 -0800
	      Finished:     Wed, 04 Dec 2024 12:02:58 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-99mj8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-99mj8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/busybox-mount to functional-306000
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.685s (1.685s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (33.36s)
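The pod flagged above is not crashed: its phase is Succeeded with exit code 0, and the Kubernetes field selector status.phase!=Running matches Succeeded pods as well as Pending and Failed ones, which is why a completed mount helper lands in the "non-running pods" list. A minimal Go sketch of the same check (not the helpers_test.go implementation; the profile name functional-306000 is taken from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonRunningPods shells out to kubectl the same way the post-mortem
// above does: every pod whose phase is not Running, in all namespaces.
// A Succeeded pod such as busybox-mount matches this selector too.
func nonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl",
		"--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := nonRunningPods("functional-306000")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("non-running pods:", pods)
}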

TestMultiControlPlane/serial/StartCluster (725.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-990000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1204 12:03:31.538095    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:05:47.648183    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:06:15.379109    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:20.572940    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:20.580653    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:20.594062    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:20.616929    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:20.660320    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:20.743757    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:20.907259    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:21.230727    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:21.874416    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:23.158125    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:25.721924    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:30.845579    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:07:41.089195    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:08:01.570865    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:08:42.534683    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:10:04.457052    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:10:47.644320    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:12:20.567271    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:12:48.297925    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-990000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 52 (12m5.301046583s)

-- stdout --
	* [ha-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-990000" primary control-plane node in "ha-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Deleting "ha-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1204 12:03:13.974117    2996 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:03:13.974270    2996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:03:13.974274    2996 out.go:358] Setting ErrFile to fd 2...
	I1204 12:03:13.974276    2996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:03:13.974391    2996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:03:13.975556    2996 out.go:352] Setting JSON to false
	I1204 12:03:13.995059    2996 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1964,"bootTime":1733340629,"procs":570,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:03:13.995149    2996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:03:13.998845    2996 out.go:177] * [ha-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:03:14.006828    2996 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:03:14.006858    2996 notify.go:220] Checking for updates...
	I1204 12:03:14.014824    2996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:03:14.021888    2996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:03:14.025836    2996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:03:14.029827    2996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:03:14.032718    2996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:03:14.036998    2996 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:03:14.039843    2996 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:03:14.046817    2996 start.go:297] selected driver: qemu2
	I1204 12:03:14.046823    2996 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:03:14.046828    2996 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:03:14.049879    2996 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:03:14.052831    2996 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:03:14.056884    2996 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:03:14.056901    2996 cni.go:84] Creating CNI manager for ""
	I1204 12:03:14.056922    2996 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 12:03:14.056926    2996 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 12:03:14.056963    2996 start.go:340] cluster config:
	{Name:ha-990000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:03:14.061781    2996 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:03:14.069656    2996 out.go:177] * Starting "ha-990000" primary control-plane node in "ha-990000" cluster
	I1204 12:03:14.073839    2996 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:03:14.073855    2996 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:03:14.073866    2996 cache.go:56] Caching tarball of preloaded images
	I1204 12:03:14.073971    2996 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:03:14.073976    2996 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:03:14.074195    2996 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/ha-990000/config.json ...
	I1204 12:03:14.074205    2996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/ha-990000/config.json: {Name:mkc814805085f78823997a8a4570704815dec340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:03:14.074704    2996 start.go:360] acquireMachinesLock for ha-990000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:03:14.074746    2996 start.go:364] duration metric: took 36.958µs to acquireMachinesLock for "ha-990000"
	I1204 12:03:14.074758    2996 start.go:93] Provisioning new machine with config: &{Name:ha-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:03:14.074793    2996 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:03:14.078854    2996 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:03:14.100773    2996 start.go:159] libmachine.API.Create for "ha-990000" (driver="qemu2")
	I1204 12:03:14.100804    2996 client.go:168] LocalClient.Create starting
	I1204 12:03:14.100886    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:03:14.100926    2996 main.go:141] libmachine: Decoding PEM data...
	I1204 12:03:14.100938    2996 main.go:141] libmachine: Parsing certificate...
	I1204 12:03:14.100976    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:03:14.101005    2996 main.go:141] libmachine: Decoding PEM data...
	I1204 12:03:14.101014    2996 main.go:141] libmachine: Parsing certificate...
	I1204 12:03:14.101377    2996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:03:14.296935    2996 main.go:141] libmachine: Creating SSH key...
	I1204 12:03:14.477415    2996 main.go:141] libmachine: Creating Disk image...
	I1204 12:03:14.477422    2996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:03:14.477667    2996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2
	I1204 12:03:14.492013    2996 main.go:141] libmachine: STDOUT: 
	I1204 12:03:14.492029    2996 main.go:141] libmachine: STDERR: 
	I1204 12:03:14.492086    2996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2 +20000M
	I1204 12:03:14.500627    2996 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:03:14.500645    2996 main.go:141] libmachine: STDERR: 
	I1204 12:03:14.500664    2996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2
	I1204 12:03:14.500670    2996 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:03:14.500682    2996 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:03:14.500725    2996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:b0:5d:d8:ee:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2
	I1204 12:03:14.544773    2996 main.go:141] libmachine: STDOUT: 
	I1204 12:03:14.544792    2996 main.go:141] libmachine: STDERR: 
	I1204 12:03:14.544796    2996 main.go:141] libmachine: Attempt 0
	I1204 12:03:14.544823    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:14.544925    2996 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1204 12:03:14.544941    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:03:14.544952    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:03:14.544958    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:03:14.544966    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:03:16.547094    2996 main.go:141] libmachine: Attempt 1
	I1204 12:03:16.547218    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:16.547669    2996 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1204 12:03:16.547725    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:03:16.547785    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:03:16.547819    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:03:16.547852    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:03:18.550044    2996 main.go:141] libmachine: Attempt 2
	I1204 12:03:18.550191    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:18.550585    2996 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1204 12:03:18.550644    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:03:18.550676    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:03:18.550747    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:03:18.550780    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:03:20.552944    2996 main.go:141] libmachine: Attempt 3
	I1204 12:03:20.552997    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:20.553143    2996 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1204 12:03:20.553156    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:03:20.553165    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:03:20.553172    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:03:20.553178    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:03:22.555209    2996 main.go:141] libmachine: Attempt 4
	I1204 12:03:22.555230    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:22.555305    2996 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1204 12:03:22.555328    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:03:22.555336    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:03:22.555341    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:03:22.555350    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:03:24.557370    2996 main.go:141] libmachine: Attempt 5
	I1204 12:03:24.557385    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:24.557428    2996 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1204 12:03:24.557436    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:03:24.557442    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:03:24.557448    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:03:24.557454    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:03:26.559505    2996 main.go:141] libmachine: Attempt 6
	I1204 12:03:26.559526    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:26.559624    2996 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1204 12:03:26.559635    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:03:26.559641    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:03:26.559646    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:03:26.559650    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:03:28.561692    2996 main.go:141] libmachine: Attempt 7
	I1204 12:03:28.561739    2996 main.go:141] libmachine: Searching for 2a:b0:5d:d8:ee:c2 in /var/db/dhcpd_leases ...
	I1204 12:03:28.561891    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:03:28.561905    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:03:28.561909    2996 main.go:141] libmachine: Found match: 2a:b0:5d:d8:ee:c2
	I1204 12:03:28.561937    2996 main.go:141] libmachine: IP: 192.168.105.5
	I1204 12:03:28.561943    2996 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1204 12:09:14.097348    2996 start.go:128] duration metric: took 6m0.028911292s to createHost
	I1204 12:09:14.097415    2996 start.go:83] releasing machines lock for "ha-990000", held for 6m0.02905925s
	W1204 12:09:14.097477    2996 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I1204 12:09:14.108314    2996 out.go:177] * Deleting "ha-990000" in qemu2 ...
	W1204 12:09:14.142100    2996 out.go:270] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1204 12:09:14.142136    2996 start.go:729] Will try again in 5 seconds ...
	I1204 12:09:19.144269    2996 start.go:360] acquireMachinesLock for ha-990000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:09:19.144600    2996 start.go:364] duration metric: took 257.292µs to acquireMachinesLock for "ha-990000"
	I1204 12:09:19.144670    2996 start.go:93] Provisioning new machine with config: &{Name:ha-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:09:19.144844    2996 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:09:19.148729    2996 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:09:19.195860    2996 start.go:159] libmachine.API.Create for "ha-990000" (driver="qemu2")
	I1204 12:09:19.195912    2996 client.go:168] LocalClient.Create starting
	I1204 12:09:19.196087    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:09:19.196168    2996 main.go:141] libmachine: Decoding PEM data...
	I1204 12:09:19.196188    2996 main.go:141] libmachine: Parsing certificate...
	I1204 12:09:19.196262    2996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:09:19.196325    2996 main.go:141] libmachine: Decoding PEM data...
	I1204 12:09:19.196341    2996 main.go:141] libmachine: Parsing certificate...
	I1204 12:09:19.199373    2996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:09:19.375543    2996 main.go:141] libmachine: Creating SSH key...
	I1204 12:09:19.434829    2996 main.go:141] libmachine: Creating Disk image...
	I1204 12:09:19.434835    2996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:09:19.435047    2996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2
	I1204 12:09:19.445101    2996 main.go:141] libmachine: STDOUT: 
	I1204 12:09:19.445121    2996 main.go:141] libmachine: STDERR: 
	I1204 12:09:19.445184    2996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2 +20000M
	I1204 12:09:19.453886    2996 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:09:19.453902    2996 main.go:141] libmachine: STDERR: 
	I1204 12:09:19.453916    2996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2
	I1204 12:09:19.453921    2996 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:09:19.453931    2996 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:09:19.453970    2996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:09:28:33:2d:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2
	I1204 12:09:19.490923    2996 main.go:141] libmachine: STDOUT: 
	I1204 12:09:19.490956    2996 main.go:141] libmachine: STDERR: 
	I1204 12:09:19.490960    2996 main.go:141] libmachine: Attempt 0
	I1204 12:09:19.490991    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:19.491114    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:09:19.491124    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:09:19.491131    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:09:19.491139    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:09:19.491145    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:09:19.491153    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:09:21.493322    2996 main.go:141] libmachine: Attempt 1
	I1204 12:09:21.493416    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:21.493870    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:09:21.493960    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:09:21.493996    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:09:21.494030    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:09:21.494061    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:09:21.494092    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:09:23.496276    2996 main.go:141] libmachine: Attempt 2
	I1204 12:09:23.496418    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:23.496865    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:09:23.496919    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:09:23.496948    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:09:23.496978    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:09:23.497013    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:09:23.497043    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:09:25.498784    2996 main.go:141] libmachine: Attempt 3
	I1204 12:09:25.498850    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:25.498991    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:09:25.499012    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:09:25.499019    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:09:25.499026    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:09:25.499032    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:09:25.499043    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:09:27.501102    2996 main.go:141] libmachine: Attempt 4
	I1204 12:09:27.501127    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:27.501220    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:09:27.501234    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:09:27.501240    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:09:27.501257    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:09:27.501263    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:09:27.501269    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:09:29.503291    2996 main.go:141] libmachine: Attempt 5
	I1204 12:09:29.503315    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:29.503369    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:09:29.503380    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:09:29.503385    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:09:29.503411    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:09:29.503417    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:09:29.503422    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:09:31.505471    2996 main.go:141] libmachine: Attempt 6
	I1204 12:09:31.505496    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:31.505588    2996 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1204 12:09:31.505599    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:2a:b0:5d:d8:ee:c2 ID:1,2a:b0:5d:d8:ee:c2 Lease:0x6750c39e}
	I1204 12:09:31.505605    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:fe:f0:fa:81:27:cc ID:1,fe:f0:fa:81:27:cc Lease:0x6750c2df}
	I1204 12:09:31.505611    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:a2:a6:b9:28:17:ae ID:1,a2:a6:b9:28:17:ae Lease:0x6750b48d}
	I1204 12:09:31.505616    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:72:13:a5:db:df:15 ID:1,72:13:a5:db:df:15 Lease:0x6750b464}
	I1204 12:09:31.505622    2996 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x6750bbf6}
	I1204 12:09:33.507803    2996 main.go:141] libmachine: Attempt 7
	I1204 12:09:33.507888    2996 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:09:33.508340    2996 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1204 12:09:33.508397    2996 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:ce:09:28:33:2d:71 ID:1,ce:9:28:33:2d:71 Lease:0x6750c50c}
	I1204 12:09:33.508416    2996 main.go:141] libmachine: Found match: ce:09:28:33:2d:71
	I1204 12:09:33.508455    2996 main.go:141] libmachine: IP: 192.168.105.6
	I1204 12:09:33.508478    2996 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1204 12:15:19.191791    2996 start.go:128] duration metric: took 6m0.053301792s to createHost
	I1204 12:15:19.191874    2996 start.go:83] releasing machines lock for "ha-990000", held for 6m0.05365775s
	W1204 12:15:19.192139    2996 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-990000" may fix it: creating host: create host timed out in 360.000000 seconds
	* Failed to start qemu2 VM. Running "minikube delete -p ha-990000" may fix it: creating host: create host timed out in 360.000000 seconds
	I1204 12:15:19.200628    2996 out.go:201] 
	W1204 12:15:19.204760    2996 out.go:270] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	W1204 12:15:19.204823    2996 out.go:270] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1204 12:15:19.204865    2996 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1204 12:15:19.217697    2996 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-990000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (76.434958ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:15:19.308529    3130 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:15:19.308540    3130 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (725.38s)
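Both create attempts above follow the same arc: the VM boots far enough to take a DHCP lease within about 15 seconds (192.168.105.5, then 192.168.105.6 on the retry), but "Waiting for VM to start (ssh ...)" never completes, so createHost hits the 6m0s StartHostTimeout recorded in the cluster config, minikube deletes the VM and retries once, and the run exits 52 with DRV_CREATE_TIMEOUT. A rough Go sketch of that wait step, under stated assumptions (polling tcp/22 is a simplification; minikube's actual provisioning does more than a port check):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the guest's SSH port until it answers or the
// deadline passes. The 360-second figure in the error messages above
// is this kind of per-host creation timeout.
func waitForSSH(ip string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	addr := net.JoinHostPort(ip, "22")
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil // guest reachable; provisioning would continue
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("create host timed out in %f seconds", timeout.Seconds())
}

func main() {
	// 192.168.105.5 is the lease the first attempt found in the log.
	if err := waitForSSH("192.168.105.5", 6*time.Minute); err != nil {
		fmt.Println("StartHost failed:", err)
	}
}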

TestMultiControlPlane/serial/DeployApp (119.97s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (65.070291ms)

** stderr ** 
	error: cluster "ha-990000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- rollout status deployment/busybox: exit status 1 (62.321625ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (65.701334ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:19.502955    1856 retry.go:31] will retry after 845.332622ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.716417ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:20.458291    1856 retry.go:31] will retry after 1.160089269s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.930625ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:21.728700    1856 retry.go:31] will retry after 2.983965939s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.897167ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:24.822971    1856 retry.go:31] will retry after 4.319387905s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.369416ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:29.253057    1856 retry.go:31] will retry after 4.409975867s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.9235ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:33.775369    1856 retry.go:31] will retry after 8.44893527s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.593459ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:42.336169    1856 retry.go:31] will retry after 13.756271193s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1204 12:15:47.639177    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.557375ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:15:56.204191    1856 retry.go:31] will retry after 16.450988048s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.661416ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:16:12.765044    1856 retry.go:31] will retry after 23.039909528s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.1055ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:16:35.914224    1856 retry.go:31] will retry after 42.966071308s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1204 12:17:10.731527    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.520625ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.146208ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.279209ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.71075ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.765291ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (35.099958ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:19.272127    3213 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:19.272133    3213 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (119.97s)
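Note: the retry.go:31 lines above show the shape of this failure. Each "get pods" attempt dies with `no server found for cluster "ha-990000"` and is retried after a longer, jittered wait (16.4s, 23.0s, 43.0s) until the test's budget runs out. A minimal Go sketch of that backoff pattern (illustrative names and budget, not minikube's actual retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs op with a growing, jittered wait until it
// succeeds or the overall time budget is spent.
func retryWithBackoff(budget time.Duration, op func() error) error {
	wait := 10 * time.Second
	deadline := time.Now().Add(budget)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Grow the base wait and add jitter; this is why the intervals
		// in the log are uneven rather than a clean doubling.
		wait += time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
}

func main() {
	_ = retryWithBackoff(90*time.Second, func() error {
		return errors.New(`no server found for cluster "ha-990000"`)
	})
}

Since the apiserver never comes up, every retry is equivalent; the backoff only changes how long the test takes to fail, not the outcome.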

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-990000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.293292ms)

** stderr ** 
	error: no server found for cluster "ha-990000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (35.016708ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:19.368740    3218 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:19.368749    3218 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-990000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-990000 -v=7 --alsologtostderr: exit status 50 (49.381208ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1204 12:17:19.402215    3220 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:17:19.402495    3220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:19.402498    3220 out.go:358] Setting ErrFile to fd 2...
	I1204 12:17:19.402501    3220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:19.402635    3220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:17:19.402887    3220 mustload.go:65] Loading cluster: ha-990000
	I1204 12:17:19.403103    3220 config.go:182] Loaded profile config "ha-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:17:19.403770    3220 host.go:66] Checking if "ha-990000" exists ...
	I1204 12:17:19.407430    3220 out.go:201] 
	W1204 12:17:19.410285    3220 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-990000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-990000 endpoint: failed to lookup ip for ""
	W1204 12:17:19.410298    3220 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1204 12:17:19.413322    3220 out.go:201] 

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-990000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (37.336875ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:19.455602    3222 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:19.455609    3222 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)
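Note: both symptoms here trace to one root cause. DRV_CP_ENDPOINT reports `failed to lookup ip for ""`, and the recurring status.go errors end in a bare `parsing IP: `, because the stored profile's only node has an empty IP field (see the Nodes entry in the profile JSON below). A minimal reproduction of that failure mode:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The ha-990000 profile records Nodes:[{Name:"", IP:"", ...}], so the
	// status path ends up parsing an empty string.
	ip := net.ParseIP("")
	// Prints true: "" is not a valid address, and an error message built
	// as "parsing IP: <value>" prints with nothing after the colon.
	fmt.Println(ip == nil)
}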

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-990000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-990000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.105792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-990000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-990000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-990000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (37.269708ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:19.522344    3225 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:19.522353    3225 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.07s)
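Note: the `unexpected end of JSON input` at ha_test.go:264 is the standard encoding/json error for decoding zero bytes. kubectl wrote nothing to stdout because the "ha-990000" context does not exist, and the test then decodes the empty output. A minimal reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	// Decoding kubectl's empty stdout reproduces the test's error.
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}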

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-990000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-990000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-990000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-990000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-990000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-990000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-990000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-990000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (34.913417ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:19.615365    3230 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:19.615374    3230 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
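Note: the HAppy assertions parse `profile list --output json` and check Status plus the length of Config.Nodes, which is why the one-node, Status "Unknown" profile above fails both checks. A trimmed sketch of that parsing (the struct is a hand-written subset of the JSON above, not minikube's own config types):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just the fields the check needs.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				IP           string `json:"IP"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the profile JSON captured in the log above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-990000","Status":"Unknown","Config":{"Nodes":[{"Name":"","IP":"","ControlPlane":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	// Prints: ha-990000 status=Unknown nodes=1 (the test wants 4 and "HAppy").
	fmt.Printf("%s status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
}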

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-990000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-990000 node stop m02 -v=7 --alsologtostderr: exit status 85 (52.149875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1204 12:17:19.683581    3234 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:17:19.683891    3234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:19.683895    3234 out.go:358] Setting ErrFile to fd 2...
	I1204 12:17:19.683897    3234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:19.684024    3234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:17:19.684282    3234 mustload.go:65] Loading cluster: ha-990000
	I1204 12:17:19.684513    3234 config.go:182] Loaded profile config "ha-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:17:19.689198    3234 out.go:201] 
	W1204 12:17:19.692243    3234 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1204 12:17:19.692253    3234 out.go:270] * 
	* 
	W1204 12:17:19.693742    3234 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:17:19.698201    3234 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-990000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (34.872917ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:19.772308    3238 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:19.772317    3238 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
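Note: GUEST_NODE_RETRIEVE follows directly from the stored config: after the failed StartCluster the profile holds a single node with an empty name, so any lookup of "m02" must fail before a stop can even be attempted. A minimal sketch of such a lookup (illustrative types, not minikube's node package):

package main

import "fmt"

type node struct{ Name string }

// findNode scans the profile's node list for an exact name match.
func findNode(nodes []node, name string) (node, bool) {
	for _, n := range nodes {
		if n.Name == name {
			return n, true
		}
	}
	return node{}, false
}

func main() {
	nodes := []node{{Name: ""}} // what the ha-990000 profile actually holds
	if _, ok := findNode(nodes, "m02"); !ok {
		fmt.Println("retrieving node: Could not find node m02")
	}
}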

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-990000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-990000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-990000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-990000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (34.469375ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:19.857774    3243 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:19.857784    3243 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-990000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-990000 node start m02 -v=7 --alsologtostderr: exit status 85 (52.075ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1204 12:17:19.891487    3245 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:17:19.891746    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:19.891750    3245 out.go:358] Setting ErrFile to fd 2...
	I1204 12:17:19.891752    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:19.891902    3245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:17:19.892147    3245 mustload.go:65] Loading cluster: ha-990000
	I1204 12:17:19.892349    3245 config.go:182] Loaded profile config "ha-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:17:19.897201    3245 out.go:201] 
	W1204 12:17:19.900182    3245 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1204 12:17:19.900192    3245 out.go:270] * 
	* 
	W1204 12:17:19.901623    3245 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:17:19.906177    3245 out.go:201] 

** /stderr **
ha_test.go:424: I1204 12:17:19.891487    3245 out.go:345] Setting OutFile to fd 1 ...
I1204 12:17:19.891746    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:17:19.891750    3245 out.go:358] Setting ErrFile to fd 2...
I1204 12:17:19.891752    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:17:19.891902    3245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
I1204 12:17:19.892147    3245 mustload.go:65] Loading cluster: ha-990000
I1204 12:17:19.892349    3245 config.go:182] Loaded profile config "ha-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:17:19.897201    3245 out.go:201] 
W1204 12:17:19.900182    3245 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1204 12:17:19.900192    3245 out.go:270] * 
* 
W1204 12:17:19.901623    3245 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1204 12:17:19.906177    3245 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-990000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-990000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (34.094958ms)

** stderr ** 
	E1204 12:17:19.975954    3249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1204 12:17:19.976439    3249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1204 12:17:19.977741    3249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1204 12:17:19.978163    3249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1204 12:17:19.979333    3249 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?

** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (34.49175ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:20.013755    3250 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:20.013765    3250 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)
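Note: the plain `kubectl get nodes` failure is independent of minikube's own status checks. With no "ha-990000" kubeconfig context, kubectl falls back to its historical default endpoint of localhost:8080, where nothing listens on this runner. A minimal reproduction of the same dial error (assuming, as here, that port 8080 is closed):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Fails the same way kubectl's API discovery does in the log above;
	// the exact address ([::1] or 127.0.0.1) depends on resolver order.
	_, err := net.DialTimeout("tcp", "localhost:8080", time.Second)
	fmt.Println(err)
}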

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-990000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-990000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-990000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-990000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-990000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-990000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-990000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-990000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (33.924542ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:17:20.101491    3255 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:17:20.101502    3255 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (953.86s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-990000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-990000 -v=7 --alsologtostderr
E1204 12:17:20.563426    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-990000 -v=7 --alsologtostderr: (6.588309s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-990000 --wait=true -v=7 --alsologtostderr
E1204 12:20:47.633314    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:22:20.556331    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:23:43.649747    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:25:47.626620    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:27:20.550970    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:30:47.627481    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:32:20.557235    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-990000 --wait=true -v=7 --alsologtostderr: signal: killed (15m47.20328025s)

-- stdout --
	* [ha-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-990000" primary control-plane node in "ha-990000" cluster
	* Restarting existing qemu2 VM for "ha-990000" ...

-- /stdout --
** stderr ** 
	I1204 12:17:26.790583    3280 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:17:26.790804    3280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:26.790808    3280 out.go:358] Setting ErrFile to fd 2...
	I1204 12:17:26.790811    3280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:17:26.790975    3280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:17:26.792235    3280 out.go:352] Setting JSON to false
	I1204 12:17:26.812291    3280 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2817,"bootTime":1733340629,"procs":572,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:17:26.812360    3280 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:17:26.816245    3280 out.go:177] * [ha-990000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:17:26.824240    3280 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:17:26.824320    3280 notify.go:220] Checking for updates...
	I1204 12:17:26.832095    3280 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:17:26.835096    3280 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:17:26.839154    3280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:17:26.842183    3280 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:17:26.845200    3280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:17:26.848519    3280 config.go:182] Loaded profile config "ha-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:17:26.848574    3280 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:17:26.853176    3280 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:17:26.860088    3280 start.go:297] selected driver: qemu2
	I1204 12:17:26.860095    3280 start.go:901] validating driver "qemu2" against &{Name:ha-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:17:26.860149    3280 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:17:26.862657    3280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:17:26.862686    3280 cni.go:84] Creating CNI manager for ""
	I1204 12:17:26.862710    3280 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 12:17:26.862768    3280 start.go:340] cluster config:
	{Name:ha-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:17:26.867424    3280 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:17:26.875185    3280 out.go:177] * Starting "ha-990000" primary control-plane node in "ha-990000" cluster
	I1204 12:17:26.879204    3280 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:17:26.879223    3280 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:17:26.879235    3280 cache.go:56] Caching tarball of preloaded images
	I1204 12:17:26.879318    3280 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:17:26.879325    3280 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:17:26.879375    3280 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/ha-990000/config.json ...
	I1204 12:17:26.879893    3280 start.go:360] acquireMachinesLock for ha-990000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:17:26.879942    3280 start.go:364] duration metric: took 42.875µs to acquireMachinesLock for "ha-990000"
	I1204 12:17:26.879952    3280 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:17:26.879956    3280 fix.go:54] fixHost starting: 
	I1204 12:17:26.880073    3280 fix.go:112] recreateIfNeeded on ha-990000: state=Stopped err=<nil>
	W1204 12:17:26.880082    3280 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:17:26.887156    3280 out.go:177] * Restarting existing qemu2 VM for "ha-990000" ...
	I1204 12:17:26.891241    3280 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:17:26.891325    3280 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:09:28:33:2d:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/ha-990000/disk.qcow2
	I1204 12:17:26.932536    3280 main.go:141] libmachine: STDOUT: 
	I1204 12:17:26.932567    3280 main.go:141] libmachine: STDERR: 
	I1204 12:17:26.932570    3280 main.go:141] libmachine: Attempt 0
	I1204 12:17:26.932592    3280 main.go:141] libmachine: Searching for ce:09:28:33:2d:71 in /var/db/dhcpd_leases ...
	I1204 12:17:26.932676    3280 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1204 12:17:26.932691    3280 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:ce:09:28:33:2d:71 ID:1,ce:9:28:33:2d:71 Lease:0x6750b8d3}
	I1204 12:17:26.932697    3280 main.go:141] libmachine: Found match: ce:09:28:33:2d:71
	I1204 12:17:26.932703    3280 main.go:141] libmachine: IP: 192.168.105.6
	I1204 12:17:26.932708    3280 main.go:141] libmachine: Waiting for VM to start (ssh -p 0 docker@192.168.105.6)...

** /stderr **
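The restart log above shows how libmachine recovers the VM's IP: it scans the host's DHCP lease database, /var/db/dhcpd_leases, for an entry whose hardware address matches the VM's NIC (ce:09:28:33:2d:71, recorded with leading zeros dropped). A minimal sketch of that kind of lookup, assuming the key=value block format macOS uses for lease entries; findIPByMAC is a hypothetical helper, not minikube's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the macOS DHCP lease file for a block whose hw_address
// matches the given MAC and returns the ip_address recorded in that block.
// Assumes ip_address= precedes hw_address= within each lease entry.
func findIPByMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// The value carries a type prefix, e.g. "1,ce:9:28:33:2d:71".
			if i := strings.IndexByte(line, ','); i >= 0 && strings.EqualFold(line[i+1:], mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	// Leading zeros are dropped in the file, so probe with the "ce:9:..." form.
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "ce:9:28:33:2d:71")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // e.g. 192.168.105.6
}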
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-990000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-990000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-990000: context deadline exceeded (417ns)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-990000" : context deadline exceeded
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-990000	

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-990000 -n ha-990000: exit status 7 (34.305541ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1204 12:33:13.956828    3892 status.go:393] failed to get driver ip: parsing IP: 
	E1204 12:33:13.956838    3892 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-990000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (953.86s)
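The 417ns "context deadline exceeded" exit above is standard Go context behavior rather than anything the node list command did: once the test's shared context has passed its deadline (here consumed by the 953s restart), exec.CommandContext refuses to start the next process at all. A minimal reproduction, standard library only, not the test harness's code:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Give the context a deadline that is already unreachable, standing in
	// for a test context exhausted by an earlier long-running step.
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond) // the deadline is now long past

	start := time.Now()
	err := exec.CommandContext(ctx, "true").Run()
	// Prints something like: after 12µs: context deadline exceeded
	fmt.Printf("after %v: %v\n", time.Since(start), err)
}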

TestJSONOutput/start/Command (725.26s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-377000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1204 12:35:47.631246    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:37:20.553661    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:40:23.648779    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:40:47.625924    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:42:20.549424    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:45:47.621898    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-377000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 52 (12m5.254249041s)

-- stdout --
	{"specversion":"1.0","id":"eaca9702-9f0e-4e0c-92fe-5688f16a4570","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-377000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a40ff9a7-7474-4ef5-ac09-dfe9c5bc1d1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19985"}}
	{"specversion":"1.0","id":"35bcb9cd-8cdc-4445-af25-944af4cb351e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig"}}
	{"specversion":"1.0","id":"f9ecce85-fc0e-4c44-b0c8-34d01d8b7ed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"122211b8-ee23-4937-a9eb-b78a76a8f06e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4908d684-908a-48c9-8012-a10faf914744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube"}}
	{"specversion":"1.0","id":"5fe36cc7-64c3-4abd-b652-a25789f499d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9a67498c-70a5-48f3-aa4f-74134067c76e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e3b4e90-ea29-46fb-aa7b-40683d7d96dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"16ecc1af-15d1-4f78-968d-590f58f4495a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-377000\" primary control-plane node in \"json-output-377000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa156160-a5bf-4327-99b9-2e767b142aa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"69a881ee-1a0e-41a9-a446-9deb02a140e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-377000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e0b93e1-2fc9-494b-95bd-cf415070909c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"1097f25a-acf4-41dd-b805-1d36be4b2916","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e9ab2c5-531a-410f-aabc-32e814c4b733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-377000\" may fix it: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"25246ba8-5a2f-4a92-a6ab-e786bcf9c2e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try 'minikube delete', and disable any conflicting VPN or firewall software","exitcode":"52","issues":"https://github.com/kubernetes/minikube/issues/7072","message":"Failed to start host: creating host: create host timed out in 360.000000 seconds","name":"DRV_CREATE_TIMEOUT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-377000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 52
--- FAIL: TestJSONOutput/start/Command (725.26s)
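Each -- stdout -- line above is a CloudEvents-style JSON object; the json_output subtests that follow decode these lines and then check invariants on the step events. A minimal sketch of that decoding, assuming only the fields visible in the output (type, data.currentstep, data.message); the event struct here is illustrative, not the suite's actual types:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors just the fields visible in the dump above.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		Message     string `json:"message"`
	} `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("step %s: %s\n", ev.Data.CurrentStep, ev.Data.Message)
		}
	}
}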

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-377000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: eaca9702-9f0e-4e0c-92fe-5688f16a4570
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-377000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a40ff9a7-7474-4ef5-ac09-dfe9c5bc1d1e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19985"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 35bcb9cd-8cdc-4445-af25-944af4cb351e
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f9ecce85-fc0e-4c44-b0c8-34d01d8b7ed9
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 122211b8-ee23-4937-a9eb-b78a76a8f06e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 4908d684-908a-48c9-8012-a10faf914744
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5fe36cc7-64c3-4abd-b652-a25789f499d2
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9a67498c-70a5-48f3-aa4f-74134067c76e
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3e3b4e90-ea29-46fb-aa7b-40683d7d96dd
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 16ecc1af-15d1-4f78-968d-590f58f4495a
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-377000\" primary control-plane node in \"json-output-377000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: aa156160-a5bf-4327-99b9-2e767b142aa2
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 69a881ee-1a0e-41a9-a446-9deb02a140e6
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-377000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 0e0b93e1-2fc9-494b-95bd-cf415070909c
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1097f25a-acf4-41dd-b805-1d36be4b2916
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 8e9ab2c5-531a-410f-aabc-32e814c4b733
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-377000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 25246ba8-5a2f-4a92-a6ab-e786bcf9c2e1
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
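The invariant behind this failure is simple: across a successful start, no two step events may carry the same currentstep value, yet the create/delete/retry loop above emitted step 9 three times (create, delete, re-create). A minimal sketch of the distinctness check over already-extracted currentstep values; checkDistinct is a hypothetical helper, not the suite's code:

package main

import "fmt"

// checkDistinct rejects any currentstep value that appears twice, which is
// the invariant DistinctCurrentSteps asserts, in spirit.
func checkDistinct(steps []string) error {
	seen := map[string]bool{}
	for _, s := range steps {
		if seen[s] {
			return fmt.Errorf("step %s has already been assigned to another step", s)
		}
		seen[s] = true
	}
	return nil
}

func main() {
	// The failing run's step sequence: 0, 1, 3, then 9 three times.
	fmt.Println(checkDistinct([]string{"0", "1", "3", "9", "9", "9"}))
}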

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: eaca9702-9f0e-4e0c-92fe-5688f16a4570
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-377000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a40ff9a7-7474-4ef5-ac09-dfe9c5bc1d1e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19985"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 35bcb9cd-8cdc-4445-af25-944af4cb351e
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f9ecce85-fc0e-4c44-b0c8-34d01d8b7ed9
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 122211b8-ee23-4937-a9eb-b78a76a8f06e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 4908d684-908a-48c9-8012-a10faf914744
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5fe36cc7-64c3-4abd-b652-a25789f499d2
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9a67498c-70a5-48f3-aa4f-74134067c76e
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3e3b4e90-ea29-46fb-aa7b-40683d7d96dd
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 16ecc1af-15d1-4f78-968d-590f58f4495a
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-377000\" primary control-plane node in \"json-output-377000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: aa156160-a5bf-4327-99b9-2e767b142aa2
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 69a881ee-1a0e-41a9-a446-9deb02a140e6
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-377000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 0e0b93e1-2fc9-494b-95bd-cf415070909c
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 1097f25a-acf4-41dd-b805-1d36be4b2916
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 8e9ab2c5-531a-410f-aabc-32e814c4b733
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-377000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 25246ba8-5a2f-4a92-a6ab-e786bcf9c2e1
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
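IncreasingCurrentSteps enforces the companion invariant: parsed as integers, successive currentstep values must be strictly increasing, which the repeated 9s above violate. A minimal sketch, again with a hypothetical helper rather than the suite's code:

package main

import (
	"fmt"
	"strconv"
)

// checkIncreasing requires currentstep values, read as integers, to be
// strictly increasing across the run.
func checkIncreasing(steps []string) error {
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return err
		}
		if n <= prev {
			return fmt.Errorf("current step is not in increasing order: %v", steps)
		}
		prev = n
	}
	return nil
}

func main() {
	fmt.Println(checkIncreasing([]string{"0", "1", "3", "9", "9"}))
}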

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-377000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-377000 --output=json --user=testUser: exit status 50 (88.415459ms)

-- stdout --
	{"specversion":"1.0","id":"c7b9d745-1cf4-4e28-b70e-14f2191dcf6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-377000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-377000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.06s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-377000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-377000 --output=json --user=testUser: exit status 50 (59.779333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-377000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-377000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
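The literal "minikube delete <no value>" in the suggestion above (and the raw {{.profileArg}} visible in the pause output before it) is Go's text/template semantics at work: executing a template against a map that lacks the referenced key renders the placeholder <no value> instead of failing. A minimal reproduction of that rendering behavior:

package main

import (
	"os"
	"text/template"
)

func main() {
	t := template.Must(template.New("advice").Parse(
		"Recreate the cluster by running:\n\tminikube delete {{.profileArg}}\n\tminikube start {{.profileArg}}\n"))
	// profileArg is never supplied, so each reference renders as "<no value>".
	_ = t.Execute(os.Stdout, map[string]any{})
}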

TestMountStart/serial/StartWithMountFirst (10.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-406000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E1204 12:47:20.545528    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-406000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.964768834s)

-- stdout --
	* [mount-start-1-406000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-406000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-406000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-406000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-406000 -n mount-start-1-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-406000 -n mount-start-1-406000: exit status 7 (72.478625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.04s)
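This failure and the multinode failures that follow all die at the same precondition: the socket_vmnet client cannot reach the daemon's unix socket, so QEMU never gets a network file descriptor. A minimal probe of that precondition, dialing the default socket path seen in the log; with no daemon listening, the error here reproduces the "Connection refused" lines above:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Dial the same unix socket the qemu2 driver hands to QEMU's netdev.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}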

TestMultiNode/serial/FreshStart2Nodes (9.89s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-729000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-729000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.811972833s)

-- stdout --
	* [multinode-729000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-729000" primary control-plane node in "multinode-729000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-729000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:47:24.082925    4462 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:47:24.083090    4462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:47:24.083094    4462 out.go:358] Setting ErrFile to fd 2...
	I1204 12:47:24.083096    4462 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:47:24.083218    4462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:47:24.084362    4462 out.go:352] Setting JSON to false
	I1204 12:47:24.102371    4462 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4615,"bootTime":1733340629,"procs":576,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:47:24.102453    4462 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:47:24.108722    4462 out.go:177] * [multinode-729000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:47:24.116913    4462 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:47:24.116961    4462 notify.go:220] Checking for updates...
	I1204 12:47:24.124790    4462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:47:24.127819    4462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:47:24.130756    4462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:47:24.133821    4462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:47:24.136825    4462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:47:24.138575    4462 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:47:24.142798    4462 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:47:24.149701    4462 start.go:297] selected driver: qemu2
	I1204 12:47:24.149707    4462 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:47:24.149713    4462 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:47:24.152236    4462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:47:24.154824    4462 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:47:24.157873    4462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:47:24.157893    4462 cni.go:84] Creating CNI manager for ""
	I1204 12:47:24.157916    4462 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 12:47:24.157920    4462 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 12:47:24.157952    4462 start.go:340] cluster config:
	{Name:multinode-729000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:47:24.162631    4462 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:47:24.170817    4462 out.go:177] * Starting "multinode-729000" primary control-plane node in "multinode-729000" cluster
	I1204 12:47:24.174861    4462 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:47:24.174881    4462 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:47:24.174888    4462 cache.go:56] Caching tarball of preloaded images
	I1204 12:47:24.174978    4462 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:47:24.174984    4462 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:47:24.175205    4462 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/multinode-729000/config.json ...
	I1204 12:47:24.175216    4462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/multinode-729000/config.json: {Name:mk4f9485fcd0aefed607699ce1f84f81271a1b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:47:24.175685    4462 start.go:360] acquireMachinesLock for multinode-729000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:47:24.175732    4462 start.go:364] duration metric: took 41.458µs to acquireMachinesLock for "multinode-729000"
	I1204 12:47:24.175745    4462 start.go:93] Provisioning new machine with config: &{Name:multinode-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:47:24.175776    4462 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:47:24.182854    4462 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:47:24.200109    4462 start.go:159] libmachine.API.Create for "multinode-729000" (driver="qemu2")
	I1204 12:47:24.200147    4462 client.go:168] LocalClient.Create starting
	I1204 12:47:24.200218    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:47:24.200254    4462 main.go:141] libmachine: Decoding PEM data...
	I1204 12:47:24.200264    4462 main.go:141] libmachine: Parsing certificate...
	I1204 12:47:24.200303    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:47:24.200332    4462 main.go:141] libmachine: Decoding PEM data...
	I1204 12:47:24.200339    4462 main.go:141] libmachine: Parsing certificate...
	I1204 12:47:24.200700    4462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:47:24.362403    4462 main.go:141] libmachine: Creating SSH key...
	I1204 12:47:24.394727    4462 main.go:141] libmachine: Creating Disk image...
	I1204 12:47:24.394732    4462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:47:24.394930    4462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:47:24.404912    4462 main.go:141] libmachine: STDOUT: 
	I1204 12:47:24.404926    4462 main.go:141] libmachine: STDERR: 
	I1204 12:47:24.404975    4462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2 +20000M
	I1204 12:47:24.413437    4462 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:47:24.413458    4462 main.go:141] libmachine: STDERR: 
	I1204 12:47:24.413480    4462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:47:24.413485    4462 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:47:24.413496    4462 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:47:24.413529    4462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:75:1d:77:db:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:47:24.415420    4462 main.go:141] libmachine: STDOUT: 
	I1204 12:47:24.415433    4462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:47:24.415450    4462 client.go:171] duration metric: took 215.299709ms to LocalClient.Create
	I1204 12:47:26.417585    4462 start.go:128] duration metric: took 2.241819125s to createHost
	I1204 12:47:26.417651    4462 start.go:83] releasing machines lock for "multinode-729000", held for 2.241940042s
	W1204 12:47:26.417714    4462 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:47:26.434976    4462 out.go:177] * Deleting "multinode-729000" in qemu2 ...
	W1204 12:47:26.461827    4462 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:47:26.461858    4462 start.go:729] Will try again in 5 seconds ...
	I1204 12:47:31.463977    4462 start.go:360] acquireMachinesLock for multinode-729000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:47:31.464529    4462 start.go:364] duration metric: took 460.25µs to acquireMachinesLock for "multinode-729000"
	I1204 12:47:31.464672    4462 start.go:93] Provisioning new machine with config: &{Name:multinode-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:47:31.464975    4462 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:47:31.481951    4462 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:47:31.531255    4462 start.go:159] libmachine.API.Create for "multinode-729000" (driver="qemu2")
	I1204 12:47:31.531302    4462 client.go:168] LocalClient.Create starting
	I1204 12:47:31.531431    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:47:31.531516    4462 main.go:141] libmachine: Decoding PEM data...
	I1204 12:47:31.531533    4462 main.go:141] libmachine: Parsing certificate...
	I1204 12:47:31.531601    4462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:47:31.531658    4462 main.go:141] libmachine: Decoding PEM data...
	I1204 12:47:31.531672    4462 main.go:141] libmachine: Parsing certificate...
	I1204 12:47:31.532626    4462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:47:31.702591    4462 main.go:141] libmachine: Creating SSH key...
	I1204 12:47:31.795676    4462 main.go:141] libmachine: Creating Disk image...
	I1204 12:47:31.795682    4462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:47:31.795903    4462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:47:31.805735    4462 main.go:141] libmachine: STDOUT: 
	I1204 12:47:31.805757    4462 main.go:141] libmachine: STDERR: 
	I1204 12:47:31.805833    4462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2 +20000M
	I1204 12:47:31.814573    4462 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:47:31.814589    4462 main.go:141] libmachine: STDERR: 
	I1204 12:47:31.814600    4462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:47:31.814607    4462 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:47:31.814617    4462 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:47:31.814654    4462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:8a:b6:3b:02:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:47:31.816413    4462 main.go:141] libmachine: STDOUT: 
	I1204 12:47:31.816426    4462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:47:31.816438    4462 client.go:171] duration metric: took 285.135708ms to LocalClient.Create
	I1204 12:47:33.818617    4462 start.go:128] duration metric: took 2.353639167s to createHost
	I1204 12:47:33.818960    4462 start.go:83] releasing machines lock for "multinode-729000", held for 2.354194291s
	W1204 12:47:33.819377    4462 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-729000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-729000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:47:33.833010    4462 out.go:201] 
	W1204 12:47:33.838066    4462 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:47:33.838097    4462 out.go:270] * 
	* 
	W1204 12:47:33.840659    4462 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:47:33.849964    4462 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-729000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (72.416833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.89s)
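
Note: every remaining TestMultiNode failure in this report cascades from the first error above: the qemu2 driver could not reach the socket_vmnet control socket at /var/run/socket_vmnet ("Connection refused"), so the VM, and therefore the cluster, was never created. The root cause can be reproduced in isolation with a minimal probe; this is a sketch, with only the socket path taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client hands to
		// qemu-system-aarch64 in the command line logged above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host: "connect: connection refused"
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused connection here typically means the socket_vmnet daemon is not running on the Jenkins host; restarting it (for Homebrew installs, via its launchd service) is the usual fix.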

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (87.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.466541ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-729000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- rollout status deployment/busybox: exit status 1 (63.272666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.323041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:47:34.192494    1856 retry.go:31] will retry after 1.143182386s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.980792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:47:35.447014    1856 retry.go:31] will retry after 1.822949316s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.215125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:47:37.379493    1856 retry.go:31] will retry after 1.455960702s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.86925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:47:38.946780    1856 retry.go:31] will retry after 2.529833431s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.681125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:47:41.587651    1856 retry.go:31] will retry after 7.100542929s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.326417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:47:48.798917    1856 retry.go:31] will retry after 8.024975318s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.002667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:47:56.930180    1856 retry.go:31] will retry after 12.672237066s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.65825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:48:09.712426    1856 retry.go:31] will retry after 23.807792952s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.743875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 12:48:33.632117    1856 retry.go:31] will retry after 27.297785603s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.636041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
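
Note: the escalating "will retry after ..." intervals above (1.1s, 1.8s, 1.5s, 2.5s, 7.1s, up to 27.3s) come from minikube's retry helper, which grows the delay between attempts and randomizes it with jitter, which is why the sequence is not strictly doubling. A self-contained sketch of that pattern follows; the attempt count and initial delay are illustrative assumptions, not minikube's actual tuning:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries op, roughly doubling the delay each attempt
	// and adding random jitter, the shape visible in the retry.go lines above.
	func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)/2+1)) // up to +50% jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(5, time.Second, func() error {
			return errors.New(`no server found for cluster "multinode-729000"`) // always fails, as above
		})
	}
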
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.296667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.841875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.052375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.514875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (33.853833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (87.39s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-729000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (60.666333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (33.736791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-729000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-729000 -v 3 --alsologtostderr: exit status 83 (45.593375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-729000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-729000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:01.448259    4574 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:01.448679    4574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.448684    4574 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:01.448686    4574 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.448870    4574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:01.449141    4574 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:01.449372    4574 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:01.453366    4574 out.go:177] * The control-plane node multinode-729000 host is not running: state=Stopped
	I1204 12:49:01.456386    4574 out.go:177]   To start a cluster, run: "minikube start -p multinode-729000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-729000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (33.5775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-729000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-729000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (31.790916ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-729000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-729000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-729000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (34.274792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-729000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-729000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-729000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-729000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (34.024959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status --output json --alsologtostderr: exit status 7 (34.496625ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-729000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:01.681271    4586 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:01.681459    4586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.681463    4586 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:01.681466    4586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.681611    4586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:01.681737    4586 out.go:352] Setting JSON to true
	I1204 12:49:01.681749    4586 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:01.681815    4586 notify.go:220] Checking for updates...
	I1204 12:49:01.681984    4586 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:01.681993    4586 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:01.682250    4586 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:01.682254    4586 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:01.682256    4586 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-729000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
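
Note: this failure is a shape mismatch rather than an unreachable host: with a single stopped node, `status --output json` prints one JSON object (see the stdout above), while the test decodes into a []cluster.Status slice, producing "cannot unmarshal object into Go value of type []cluster.Status". Below is a sketch of decoding that tolerates both shapes; field names are taken from the stdout above, and this is illustrative rather than minikube's own code:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	// decodeStatuses accepts either a single status object (one node) or an
	// array of them (multi-node), normalizing both to a slice.
	func decodeStatuses(data []byte) ([]status, error) {
		var many []status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		var one status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []status{one}, nil
	}

	func main() {
		out := []byte(`{"Name":"multinode-729000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		sts, err := decodeStatuses(out)
		if err != nil {
			panic(err)
		}
		fmt.Println(len(sts), sts[0].Host) // 1 Stopped
	}
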
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (34.28125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 node stop m03: exit status 85 (48.96625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-729000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status: exit status 7 (33.970166ms)

                                                
                                                
-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr: exit status 7 (34.142042ms)

                                                
                                                
-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:01.833687    4594 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:01.833867    4594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.833871    4594 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:01.833873    4594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.833999    4594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:01.834111    4594 out.go:352] Setting JSON to false
	I1204 12:49:01.834126    4594 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:01.834172    4594 notify.go:220] Checking for updates...
	I1204 12:49:01.834315    4594 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:01.834330    4594 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:01.834569    4594 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:01.834573    4594 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:01.834575    4594 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr": multinode-729000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (33.411875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.975833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:01.901011    4598 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:01.901306    4598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.901310    4598 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:01.901312    4598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.901457    4598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:01.901715    4598 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:01.901914    4598 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:01.906376    4598 out.go:201] 
	W1204 12:49:01.909236    4598 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1204 12:49:01.909247    4598 out.go:270] * 
	* 
	W1204 12:49:01.910752    4598 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:49:01.914284    4598 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1204 12:49:01.901011    4598 out.go:345] Setting OutFile to fd 1 ...
I1204 12:49:01.901306    4598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:49:01.901310    4598 out.go:358] Setting ErrFile to fd 2...
I1204 12:49:01.901312    4598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:49:01.901457    4598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
I1204 12:49:01.901715    4598 mustload.go:65] Loading cluster: multinode-729000
I1204 12:49:01.901914    4598 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:49:01.906376    4598 out.go:201] 
W1204 12:49:01.909236    4598 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1204 12:49:01.909247    4598 out.go:270] * 
* 
W1204 12:49:01.910752    4598 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1204 12:49:01.914284    4598 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-729000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (34.213375ms)

                                                
                                                
-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:01.951743    4600 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:01.951918    4600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.951921    4600 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:01.951923    4600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:01.952043    4600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:01.952180    4600 out.go:352] Setting JSON to false
	I1204 12:49:01.952206    4600 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:01.952241    4600 notify.go:220] Checking for updates...
	I1204 12:49:01.952433    4600 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:01.952441    4600 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:01.952686    4600 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:01.952690    4600 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:01.952692    4600 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 12:49:01.953520    1856 retry.go:31] will retry after 1.338687773s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (78.635083ms)

                                                
                                                
-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:03.371049    4602 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:03.371257    4602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:03.371261    4602 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:03.371264    4602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:03.371414    4602 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:03.371575    4602 out.go:352] Setting JSON to false
	I1204 12:49:03.371598    4602 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:03.371626    4602 notify.go:220] Checking for updates...
	I1204 12:49:03.371840    4602 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:03.371850    4602 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:03.372154    4602 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:03.372159    4602 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:03.372161    4602 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 12:49:03.373159    1856 retry.go:31] will retry after 1.869975434s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (77.375208ms)

                                                
                                                
-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:05.320760    4604 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:05.320971    4604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:05.320976    4604 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:05.320979    4604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:05.321136    4604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:05.321293    4604 out.go:352] Setting JSON to false
	I1204 12:49:05.321308    4604 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:05.321348    4604 notify.go:220] Checking for updates...
	I1204 12:49:05.321573    4604 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:05.321582    4604 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:05.321905    4604 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:05.321910    4604 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:05.321913    4604 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 12:49:05.322987    1856 retry.go:31] will retry after 2.470458997s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (81.580792ms)

                                                
                                                
-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:07.874969    4606 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:07.875233    4606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:07.875237    4606 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:07.875240    4606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:07.875404    4606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:07.875591    4606 out.go:352] Setting JSON to false
	I1204 12:49:07.875606    4606 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:07.875635    4606 notify.go:220] Checking for updates...
	I1204 12:49:07.875878    4606 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:07.875888    4606 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:07.876216    4606 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:07.876221    4606 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:07.876223    4606 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 12:49:07.877355    1856 retry.go:31] will retry after 2.738180938s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (78.728542ms)

                                                
                                                
-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 12:49:10.694414    4608 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:10.694641    4608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:10.694645    4608 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:10.694648    4608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:10.694809    4608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:10.694954    4608 out.go:352] Setting JSON to false
	I1204 12:49:10.694967    4608 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:10.694999    4608 notify.go:220] Checking for updates...
	I1204 12:49:10.695232    4608 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:10.695242    4608 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:10.695531    4608 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:10.695535    4608 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:10.695538    4608 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 12:49:10.696552    1856 retry.go:31] will retry after 7.268508691s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (78.311375ms)

-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 12:49:18.043622    4615 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:18.043820    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:18.043824    4615 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:18.043827    4615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:18.043986    4615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:18.044138    4615 out.go:352] Setting JSON to false
	I1204 12:49:18.044152    4615 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:18.044199    4615 notify.go:220] Checking for updates...
	I1204 12:49:18.044421    4615 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:18.044435    4615 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:18.044727    4615 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:18.044731    4615 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:18.044734    4615 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 12:49:18.045734    1856 retry.go:31] will retry after 8.076719265s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (79.371875ms)

-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 12:49:26.201958    4621 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:26.202190    4621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:26.202194    4621 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:26.202197    4621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:26.202354    4621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:26.202522    4621 out.go:352] Setting JSON to false
	I1204 12:49:26.202536    4621 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:26.202575    4621 notify.go:220] Checking for updates...
	I1204 12:49:26.202804    4621 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:26.202813    4621 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:26.203121    4621 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:26.203126    4621 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:26.203128    4621 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 12:49:26.204156    1856 retry.go:31] will retry after 14.762556476s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr: exit status 7 (75.910459ms)

-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 12:49:41.042511    4634 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:41.042775    4634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:41.042780    4634 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:41.042783    4634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:41.042957    4634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:41.043135    4634 out.go:352] Setting JSON to false
	I1204 12:49:41.043150    4634 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:41.043195    4634 notify.go:220] Checking for updates...
	I1204 12:49:41.043424    4634 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:41.043434    4634 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:41.043801    4634 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:41.043806    4634 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:41.043809    4634 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-729000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (35.437417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (39.21s)
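
Note on the retry cadence above: the `retry.go:31` lines show the status poll being re-run after waits of roughly 2.7s, 7.3s, 8.1s and 14.8s — a growing, jittered backoff while the host stays Stopped. A minimal sketch of that polling pattern in Go; `retryWithBackoff` and its parameters are illustrative names, not minikube's actual retry API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs op until it succeeds or attempts are exhausted,
// sleeping a growing, jittered interval between tries -- the shape of the
// "will retry after ..." lines in the log above.
func retryWithBackoff(maxAttempts int, base time.Duration, op func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		wait := base * time.Duration(1<<attempt)        // exponential growth
		wait += time.Duration(rand.Int63n(int64(wait))) // random jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	// Stand-in for polling "minikube status" against a host that never
	// leaves the Stopped state, as in the captures above.
	err := retryWithBackoff(4, 2*time.Second, func() error {
		return errors.New("exit status 7")
	})
	fmt.Println("giving up:", err)
}

The repeated exit status 7 here is the poll giving up on a host that stays Stopped, not a flake in the status command itself.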

TestMultiNode/serial/RestartKeepsNodes (8.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-729000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-729000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-729000: (3.306838875s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-729000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-729000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.231620291s)

-- stdout --
	* [multinode-729000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-729000" primary control-plane node in "multinode-729000" cluster
	* Restarting existing qemu2 VM for "multinode-729000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-729000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:49:44.489380    4667 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:44.489622    4667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:44.489626    4667 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:44.489629    4667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:44.489824    4667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:44.491103    4667 out.go:352] Setting JSON to false
	I1204 12:49:44.511124    4667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4755,"bootTime":1733340629,"procs":584,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:49:44.511202    4667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:49:44.516093    4667 out.go:177] * [multinode-729000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:49:44.523057    4667 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:49:44.523097    4667 notify.go:220] Checking for updates...
	I1204 12:49:44.530040    4667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:49:44.533078    4667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:49:44.535971    4667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:49:44.540038    4667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:49:44.543026    4667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:49:44.546350    4667 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:44.546417    4667 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:49:44.549987    4667 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:49:44.557002    4667 start.go:297] selected driver: qemu2
	I1204 12:49:44.557009    4667 start.go:901] validating driver "qemu2" against &{Name:multinode-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:49:44.557072    4667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:49:44.559605    4667 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:49:44.559629    4667 cni.go:84] Creating CNI manager for ""
	I1204 12:49:44.559657    4667 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 12:49:44.559705    4667 start.go:340] cluster config:
	{Name:multinode-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:49:44.564142    4667 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:49:44.571947    4667 out.go:177] * Starting "multinode-729000" primary control-plane node in "multinode-729000" cluster
	I1204 12:49:44.575997    4667 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:49:44.576011    4667 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:49:44.576018    4667 cache.go:56] Caching tarball of preloaded images
	I1204 12:49:44.576086    4667 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:49:44.576092    4667 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:49:44.576140    4667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/multinode-729000/config.json ...
	I1204 12:49:44.576708    4667 start.go:360] acquireMachinesLock for multinode-729000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:49:44.576756    4667 start.go:364] duration metric: took 42µs to acquireMachinesLock for "multinode-729000"
	I1204 12:49:44.576765    4667 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:49:44.576769    4667 fix.go:54] fixHost starting: 
	I1204 12:49:44.576886    4667 fix.go:112] recreateIfNeeded on multinode-729000: state=Stopped err=<nil>
	W1204 12:49:44.576894    4667 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:49:44.580018    4667 out.go:177] * Restarting existing qemu2 VM for "multinode-729000" ...
	I1204 12:49:44.588052    4667 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:49:44.588101    4667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:8a:b6:3b:02:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:49:44.590608    4667 main.go:141] libmachine: STDOUT: 
	I1204 12:49:44.590640    4667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:49:44.590670    4667 fix.go:56] duration metric: took 13.899042ms for fixHost
	I1204 12:49:44.590675    4667 start.go:83] releasing machines lock for "multinode-729000", held for 13.914834ms
	W1204 12:49:44.590682    4667 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:49:44.590729    4667 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:49:44.590734    4667 start.go:729] Will try again in 5 seconds ...
	I1204 12:49:49.592865    4667 start.go:360] acquireMachinesLock for multinode-729000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:49:49.593257    4667 start.go:364] duration metric: took 294.625µs to acquireMachinesLock for "multinode-729000"
	I1204 12:49:49.593396    4667 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:49:49.593415    4667 fix.go:54] fixHost starting: 
	I1204 12:49:49.594243    4667 fix.go:112] recreateIfNeeded on multinode-729000: state=Stopped err=<nil>
	W1204 12:49:49.594271    4667 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:49:49.602584    4667 out.go:177] * Restarting existing qemu2 VM for "multinode-729000" ...
	I1204 12:49:49.607676    4667 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:49:49.607944    4667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:8a:b6:3b:02:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:49:49.618333    4667 main.go:141] libmachine: STDOUT: 
	I1204 12:49:49.618381    4667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:49:49.618456    4667 fix.go:56] duration metric: took 25.043334ms for fixHost
	I1204 12:49:49.618471    4667 start.go:83] releasing machines lock for "multinode-729000", held for 25.192375ms
	W1204 12:49:49.618687    4667 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-729000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-729000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:49:49.625717    4667 out.go:201] 
	W1204 12:49:49.629739    4667 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:49:49.629768    4667 out.go:270] * 
	* 
	W1204 12:49:49.632300    4667 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:49:49.640601    4667 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-729000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-729000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (35.789459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.68s)
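
Every failed restart in this block dies at the same step: `socket_vmnet_client` cannot reach the daemon socket, so qemu never gets its network file descriptor and the VM start is aborted. A standalone probe for that precondition, as a sketch; it assumes a stream-type unix socket at the path from the log (the socket type is an assumption, not taken from the report):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe whether anything is accepting connections at the socket_vmnet
// path that every restart above fails against. A missing or dead daemon
// shows up as "no such file or directory" or "connection refused".
func main() {
	const path = "/var/run/socket_vmnet"
	if _, err := os.Stat(path); err != nil {
		fmt.Println("socket path problem:", err)
		os.Exit(1)
	}
	// Assumes a stream-type unix socket; if the daemon speaks a different
	// socket type the dial can fail for that reason instead.
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (daemon likely not running):", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A "connection refused" from this probe reproduces the state the whole test group is failing in.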

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 node delete m03: exit status 83 (43.529834ms)

-- stdout --
	* The control-plane node multinode-729000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-729000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-729000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr: exit status 7 (33.780125ms)

-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 12:49:49.837719    4681 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:49.837911    4681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:49.837914    4681 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:49.837916    4681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:49.838057    4681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:49.838180    4681 out.go:352] Setting JSON to false
	I1204 12:49:49.838193    4681 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:49.838232    4681 notify.go:220] Checking for updates...
	I1204 12:49:49.838404    4681 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:49.838412    4681 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:49.838652    4681 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:49.838656    4681 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:49.838658    4681 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (33.614541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (3.74s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-729000 stop: (3.599520458s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status: exit status 7 (69.724583ms)

-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr: exit status 7 (35.340292ms)

-- stdout --
	multinode-729000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 12:49:53.576534    4707 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:53.576700    4707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:53.576703    4707 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:53.576705    4707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:53.576828    4707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:53.576951    4707 out.go:352] Setting JSON to false
	I1204 12:49:53.576963    4707 mustload.go:65] Loading cluster: multinode-729000
	I1204 12:49:53.577017    4707 notify.go:220] Checking for updates...
	I1204 12:49:53.577169    4707 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:53.577181    4707 status.go:174] checking status of multinode-729000 ...
	I1204 12:49:53.577418    4707 status.go:371] multinode-729000 host status = "Stopped" (err=<nil>)
	I1204 12:49:53.577421    4707 status.go:384] host is not running, skipping remaining checks
	I1204 12:49:53.577423    4707 status.go:176] multinode-729000 status: &{Name:multinode-729000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr": multinode-729000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-729000 status --alsologtostderr": multinode-729000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (34.318834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.74s)
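
The post-mortem helper's `--format={{.Host}}` argument is a Go text/template rendered against the status structure that the stderr captures print at `status.go:176`. A self-contained sketch of that mechanism; the `Status` struct here is reconstructed from the logged fields for illustration, not imported from minikube:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields printed at status.go:176 in the logs above;
// it is a reconstruction for illustration only.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{
		Name: "multinode-729000", Host: "Stopped",
		Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped",
	}
	// --format={{.Host}} boils down to executing a template like this one
	// against the status value, which is why the post-mortem blocks print
	// the bare word "Stopped".
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}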

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-729000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-729000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.196616334s)

-- stdout --
	* [multinode-729000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-729000" primary control-plane node in "multinode-729000" cluster
	* Restarting existing qemu2 VM for "multinode-729000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-729000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:49:53.644945    4711 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:49:53.645122    4711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:53.645125    4711 out.go:358] Setting ErrFile to fd 2...
	I1204 12:49:53.645128    4711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:49:53.645265    4711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:49:53.646343    4711 out.go:352] Setting JSON to false
	I1204 12:49:53.664127    4711 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4764,"bootTime":1733340629,"procs":580,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:49:53.664227    4711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:49:53.669860    4711 out.go:177] * [multinode-729000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:49:53.676746    4711 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:49:53.676804    4711 notify.go:220] Checking for updates...
	I1204 12:49:53.684709    4711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:49:53.687698    4711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:49:53.690721    4711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:49:53.693700    4711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:49:53.696732    4711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:49:53.700007    4711 config.go:182] Loaded profile config "multinode-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:49:53.700287    4711 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:49:53.704660    4711 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:49:53.711768    4711 start.go:297] selected driver: qemu2
	I1204 12:49:53.711783    4711 start.go:901] validating driver "qemu2" against &{Name:multinode-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:49:53.711836    4711 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:49:53.714456    4711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:49:53.714484    4711 cni.go:84] Creating CNI manager for ""
	I1204 12:49:53.714503    4711 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 12:49:53.714548    4711 start.go:340] cluster config:
	{Name:multinode-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:49:53.719167    4711 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:49:53.727860    4711 out.go:177] * Starting "multinode-729000" primary control-plane node in "multinode-729000" cluster
	I1204 12:49:53.731700    4711 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:49:53.731713    4711 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:49:53.731719    4711 cache.go:56] Caching tarball of preloaded images
	I1204 12:49:53.731772    4711 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:49:53.731777    4711 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:49:53.731826    4711 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/multinode-729000/config.json ...
	I1204 12:49:53.732370    4711 start.go:360] acquireMachinesLock for multinode-729000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:49:53.732403    4711 start.go:364] duration metric: took 26.834µs to acquireMachinesLock for "multinode-729000"
	I1204 12:49:53.732413    4711 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:49:53.732417    4711 fix.go:54] fixHost starting: 
	I1204 12:49:53.732542    4711 fix.go:112] recreateIfNeeded on multinode-729000: state=Stopped err=<nil>
	W1204 12:49:53.732551    4711 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:49:53.735801    4711 out.go:177] * Restarting existing qemu2 VM for "multinode-729000" ...
	I1204 12:49:53.743689    4711 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:49:53.743735    4711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:8a:b6:3b:02:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:49:53.746150    4711 main.go:141] libmachine: STDOUT: 
	I1204 12:49:53.746170    4711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:49:53.746201    4711 fix.go:56] duration metric: took 13.782083ms for fixHost
	I1204 12:49:53.746206    4711 start.go:83] releasing machines lock for "multinode-729000", held for 13.798417ms
	W1204 12:49:53.746213    4711 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:49:53.746260    4711 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:49:53.746265    4711 start.go:729] Will try again in 5 seconds ...
	I1204 12:49:58.748446    4711 start.go:360] acquireMachinesLock for multinode-729000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:49:58.749057    4711 start.go:364] duration metric: took 511.5µs to acquireMachinesLock for "multinode-729000"
	I1204 12:49:58.749209    4711 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:49:58.749230    4711 fix.go:54] fixHost starting: 
	I1204 12:49:58.750032    4711 fix.go:112] recreateIfNeeded on multinode-729000: state=Stopped err=<nil>
	W1204 12:49:58.750059    4711 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:49:58.758528    4711 out.go:177] * Restarting existing qemu2 VM for "multinode-729000" ...
	I1204 12:49:58.763525    4711 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:49:58.763771    4711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:8a:b6:3b:02:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/multinode-729000/disk.qcow2
	I1204 12:49:58.774387    4711 main.go:141] libmachine: STDOUT: 
	I1204 12:49:58.774470    4711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:49:58.774579    4711 fix.go:56] duration metric: took 25.347958ms for fixHost
	I1204 12:49:58.774600    4711 start.go:83] releasing machines lock for "multinode-729000", held for 25.51725ms
	W1204 12:49:58.774814    4711 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-729000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-729000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:49:58.783504    4711 out.go:201] 
	W1204 12:49:58.787561    4711 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:49:58.787590    4711 out.go:270] * 
	* 
	W1204 12:49:58.789953    4711 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:49:58.796562    4711 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-729000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (70.666125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
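
The `main.go:141` lines above show how the driver launches the VM: it does not exec qemu directly but prepends the socket_vmnet client, which is expected to connect to `/var/run/socket_vmnet` and hand qemu a network file descriptor (the `fd=3` in `-netdev socket,id=net0,fd=3`). A rough sketch of that exec pattern, with paths and arguments abbreviated from the log; this is not minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	client := "/opt/socket_vmnet/bin/socket_vmnet_client"
	socket := "/var/run/socket_vmnet"
	// Heavily abbreviated from the logged qemu command line.
	qemuArgs := []string{
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-accel", "hvf",
		"-netdev", "socket,id=net0,fd=3",
	}
	// The client must reach the daemon socket before qemu ever runs,
	// which is why every attempt above fails with "Connection refused"
	// rather than a qemu error.
	cmd := exec.Command(client, append([]string{socket}, qemuArgs...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s\nerr: %v\n", out, err)
}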

TestMultiNode/serial/ValidateNameConflict (20.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-729000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-729000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-729000-m01 --driver=qemu2 : exit status 80 (9.93392575s)

-- stdout --
	* [multinode-729000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-729000-m01" primary control-plane node in "multinode-729000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-729000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-729000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-729000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-729000-m02 --driver=qemu2 : exit status 80 (9.905693084s)

-- stdout --
	* [multinode-729000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-729000-m02" primary control-plane node in "multinode-729000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-729000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-729000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-729000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-729000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-729000: exit status 83 (85.541375ms)

-- stdout --
	* The control-plane node multinode-729000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-729000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-729000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-729000 -n multinode-729000: exit status 7 (34.219625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-729000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.08s)
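
Note: every qemu2 start in this report aborts at the same step: socket_vmnet_client cannot connect to /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and each test fails before a VM exists. A minimal triage sketch for the CI agent, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest (the service name and daemon setup below are assumptions, not taken from this log):

    ls -l /var/run/socket_vmnet                 # does the socket path exist on the agent?
    sudo launchctl list | grep -i socket_vmnet  # is a launchd job for the daemon loaded?
    sudo brew services restart socket_vmnet     # restart the helper daemon if it died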

TestPreload (10.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-500000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-500000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.913623084s)

-- stdout --
	* [test-preload-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-500000" primary control-plane node in "test-preload-500000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-500000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:50:19.112654    4765 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:50:19.112803    4765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:50:19.112806    4765 out.go:358] Setting ErrFile to fd 2...
	I1204 12:50:19.112808    4765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:50:19.112937    4765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:50:19.114129    4765 out.go:352] Setting JSON to false
	I1204 12:50:19.132008    4765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4790,"bootTime":1733340629,"procs":573,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:50:19.132087    4765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:50:19.138339    4765 out.go:177] * [test-preload-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:50:19.146496    4765 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:50:19.146545    4765 notify.go:220] Checking for updates...
	I1204 12:50:19.158399    4765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:50:19.161452    4765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:50:19.165393    4765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:50:19.168403    4765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:50:19.171481    4765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:50:19.174737    4765 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:50:19.174789    4765 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:50:19.179412    4765 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:50:19.186385    4765 start.go:297] selected driver: qemu2
	I1204 12:50:19.186391    4765 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:50:19.186398    4765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:50:19.188957    4765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:50:19.191391    4765 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:50:19.194441    4765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 12:50:19.194467    4765 cni.go:84] Creating CNI manager for ""
	I1204 12:50:19.194488    4765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:50:19.194492    4765 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 12:50:19.194522    4765 start.go:340] cluster config:
	{Name:test-preload-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:50:19.199221    4765 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.207396    4765 out.go:177] * Starting "test-preload-500000" primary control-plane node in "test-preload-500000" cluster
	I1204 12:50:19.211293    4765 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1204 12:50:19.211373    4765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/test-preload-500000/config.json ...
	I1204 12:50:19.211402    4765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/test-preload-500000/config.json: {Name:mk426d751fe5ccb5350ee9562ad6534467f11d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:50:19.211405    4765 cache.go:107] acquiring lock: {Name:mkfae1a850c0b8be98a72c0bb9f0357ec8a2db46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211434    4765 cache.go:107] acquiring lock: {Name:mkaed9e0c367705223dd5fcd6d80b5bf556cfcdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211462    4765 cache.go:107] acquiring lock: {Name:mkaee2672e0c144e4d0a6a702c0f8a587e68e5dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211532    4765 cache.go:107] acquiring lock: {Name:mk34f87c1a801b7b524d07135d4ba91d3d9ee3f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211618    4765 cache.go:107] acquiring lock: {Name:mk673d02c357235efbcaa8f45894a9f4873f2b0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211626    4765 cache.go:107] acquiring lock: {Name:mk4502dae9b6619e1395e4cb73175c05912c1870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211647    4765 cache.go:107] acquiring lock: {Name:mk7c34eae5734beb114f60ad6e12f26c9f395f6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211673    4765 cache.go:107] acquiring lock: {Name:mke66d9cc477839c30ad88d76d5e8a800546d7db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:50:19.211993    4765 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 12:50:19.212046    4765 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 12:50:19.212137    4765 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1204 12:50:19.212183    4765 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1204 12:50:19.212191    4765 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:50:19.212191    4765 start.go:360] acquireMachinesLock for test-preload-500000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:50:19.212219    4765 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:50:19.212247    4765 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1204 12:50:19.212288    4765 start.go:364] duration metric: took 80.041µs to acquireMachinesLock for "test-preload-500000"
	I1204 12:50:19.212308    4765 start.go:93] Provisioning new machine with config: &{Name:test-preload-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:50:19.212371    4765 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:50:19.212383    4765 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:50:19.219374    4765 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:50:19.224389    4765 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1204 12:50:19.224397    4765 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 12:50:19.224443    4765 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1204 12:50:19.224481    4765 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:50:19.226529    4765 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:50:19.226728    4765 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:50:19.226731    4765 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1204 12:50:19.226811    4765 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1204 12:50:19.238396    4765 start.go:159] libmachine.API.Create for "test-preload-500000" (driver="qemu2")
	I1204 12:50:19.238425    4765 client.go:168] LocalClient.Create starting
	I1204 12:50:19.238497    4765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:50:19.238536    4765 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:19.238546    4765 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:19.238582    4765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:50:19.238614    4765 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:19.238623    4765 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:19.239011    4765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:50:19.401541    4765 main.go:141] libmachine: Creating SSH key...
	I1204 12:50:19.532125    4765 main.go:141] libmachine: Creating Disk image...
	I1204 12:50:19.532162    4765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:50:19.532438    4765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2
	I1204 12:50:19.543294    4765 main.go:141] libmachine: STDOUT: 
	I1204 12:50:19.543324    4765 main.go:141] libmachine: STDERR: 
	I1204 12:50:19.543397    4765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2 +20000M
	I1204 12:50:19.552261    4765 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:50:19.552290    4765 main.go:141] libmachine: STDERR: 
	I1204 12:50:19.552304    4765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2
	I1204 12:50:19.552308    4765 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:50:19.552320    4765 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:50:19.552348    4765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c7:11:c3:73:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2
	I1204 12:50:19.554151    4765 main.go:141] libmachine: STDOUT: 
	I1204 12:50:19.554164    4765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:50:19.554183    4765 client.go:171] duration metric: took 315.757666ms to LocalClient.Create
	I1204 12:50:19.788198    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1204 12:50:19.806912    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1204 12:50:19.809212    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1204 12:50:19.962324    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1204 12:50:20.019626    4765 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 12:50:20.019664    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 12:50:20.073677    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1204 12:50:20.117322    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1204 12:50:20.222769    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1204 12:50:20.222816    4765 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.011398792s
	I1204 12:50:20.222863    4765 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1204 12:50:20.454603    4765 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 12:50:20.454696    4765 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 12:50:20.910416    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1204 12:50:20.910466    4765 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.698968583s
	I1204 12:50:20.910530    4765 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1204 12:50:21.554467    4765 start.go:128] duration metric: took 2.342101708s to createHost
	I1204 12:50:21.554525    4765 start.go:83] releasing machines lock for "test-preload-500000", held for 2.342256875s
	W1204 12:50:21.554593    4765 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:50:21.570995    4765 out.go:177] * Deleting "test-preload-500000" in qemu2 ...
	W1204 12:50:21.603428    4765 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:50:21.603457    4765 start.go:729] Will try again in 5 seconds ...
	I1204 12:50:22.764159    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1204 12:50:22.764224    4765 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.552719917s
	I1204 12:50:22.764260    4765 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1204 12:50:22.778264    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1204 12:50:22.778307    4765 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.566923209s
	I1204 12:50:22.778329    4765 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1204 12:50:23.914258    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1204 12:50:23.914316    4765 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.702757042s
	I1204 12:50:23.914346    4765 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1204 12:50:24.671024    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1204 12:50:24.671063    4765 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.459533125s
	I1204 12:50:24.671087    4765 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1204 12:50:26.142002    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1204 12:50:26.142059    4765 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.930757792s
	I1204 12:50:26.142086    4765 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1204 12:50:26.603593    4765 start.go:360] acquireMachinesLock for test-preload-500000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:50:26.604082    4765 start.go:364] duration metric: took 411.084µs to acquireMachinesLock for "test-preload-500000"
	I1204 12:50:26.604189    4765 start.go:93] Provisioning new machine with config: &{Name:test-preload-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:50:26.604407    4765 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:50:26.623170    4765 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:50:26.672004    4765 start.go:159] libmachine.API.Create for "test-preload-500000" (driver="qemu2")
	I1204 12:50:26.672080    4765 client.go:168] LocalClient.Create starting
	I1204 12:50:26.672239    4765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:50:26.672323    4765 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:26.672345    4765 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:26.672419    4765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:50:26.672482    4765 main.go:141] libmachine: Decoding PEM data...
	I1204 12:50:26.672499    4765 main.go:141] libmachine: Parsing certificate...
	I1204 12:50:26.673130    4765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:50:26.843861    4765 main.go:141] libmachine: Creating SSH key...
	I1204 12:50:26.915801    4765 main.go:141] libmachine: Creating Disk image...
	I1204 12:50:26.915808    4765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:50:26.916030    4765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2
	I1204 12:50:26.926222    4765 main.go:141] libmachine: STDOUT: 
	I1204 12:50:26.926249    4765 main.go:141] libmachine: STDERR: 
	I1204 12:50:26.926321    4765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2 +20000M
	I1204 12:50:26.935052    4765 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:50:26.935072    4765 main.go:141] libmachine: STDERR: 
	I1204 12:50:26.935088    4765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2
	I1204 12:50:26.935091    4765 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:50:26.935102    4765 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:50:26.935134    4765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:85:84:36:06:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/test-preload-500000/disk.qcow2
	I1204 12:50:26.937128    4765 main.go:141] libmachine: STDOUT: 
	I1204 12:50:26.937185    4765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:50:26.937202    4765 client.go:171] duration metric: took 265.119459ms to LocalClient.Create
	I1204 12:50:28.660376    4765 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1204 12:50:28.660448    4765 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.44892475s
	I1204 12:50:28.660478    4765 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1204 12:50:28.660506    4765 cache.go:87] Successfully saved all images to host disk.
	I1204 12:50:28.939380    4765 start.go:128] duration metric: took 2.334958083s to createHost
	I1204 12:50:28.939442    4765 start.go:83] releasing machines lock for "test-preload-500000", held for 2.335368583s
	W1204 12:50:28.939755    4765 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-500000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-500000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:50:28.957408    4765 out.go:201] 
	W1204 12:50:28.962366    4765 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:50:28.962401    4765 out.go:270] * 
	* 
	W1204 12:50:28.964990    4765 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:50:28.979242    4765 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-500000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-04 12:50:28.997071 -0800 PST m=+3519.291635043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-500000 -n test-preload-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-500000 -n test-preload-500000: exit status 7 (72.053333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-500000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-500000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-500000
--- FAIL: TestPreload (10.07s)
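
The verbose trace above pinpoints where exit status 1 originates: libmachine wraps the qemu-system-aarch64 invocation in /opt/socket_vmnet/bin/socket_vmnet_client, which exits as soon as its connect to the socket is refused; the image caching that continues afterwards still succeeds, so only host creation is broken. Since socket_vmnet_client connects to the socket and then execs the given command, the failure should be reproducible in isolation with a sketch like this (`true` is an arbitrary placeholder command, an assumption rather than anything minikube itself runs):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
      && echo "socket reachable" \
      || echo "connection refused, matching the test run"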

TestScheduledStopUnix (10.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-902000 --memory=2048 --driver=qemu2 
E1204 12:50:30.718510    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-902000 --memory=2048 --driver=qemu2 : exit status 80 (9.852821375s)

-- stdout --
	* [scheduled-stop-902000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-902000" primary control-plane node in "scheduled-stop-902000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-902000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-902000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-902000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-902000" primary control-plane node in "scheduled-stop-902000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-902000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-902000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-04 12:50:39.005249 -0800 PST m=+3529.299949959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-902000 -n scheduled-stop-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-902000 -n scheduled-stop-902000: exit status 7 (72.787333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-902000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-902000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-902000
--- FAIL: TestScheduledStopUnix (10.01s)
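
For local debugging, socket_vmnet is not the only networking option for the qemu2 driver: minikube also supports user-mode ("builtin") networking, which bypasses the socket entirely at the cost of features that need a reachable node IP (such as minikube service and minikube tunnel). A hedged workaround sketch (whether this integration suite would tolerate builtin networking is an assumption):

    out/minikube-darwin-arm64 start -p scheduled-stop-902000 --memory=2048 \
      --driver=qemu2 --network=builtin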

TestSkaffold (12.71s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4036297540 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4036297540 version: (1.017580209s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-102000 --memory=2600 --driver=qemu2 
E1204 12:50:47.618158    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-102000 --memory=2600 --driver=qemu2 : exit status 80 (9.875134959s)

-- stdout --
	* [skaffold-102000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-102000" primary control-plane node in "skaffold-102000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-102000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-102000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-102000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-102000" primary control-plane node in "skaffold-102000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-102000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-102000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-04 12:50:51.723053 -0800 PST m=+3542.017927293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-102000 -n skaffold-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-102000 -n skaffold-102000: exit status 7 (68.678875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-102000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-102000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-102000
--- FAIL: TestSkaffold (12.71s)

TestRunningBinaryUpgrade (596.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2332252860 start -p running-upgrade-728000 --memory=2200 --vm-driver=qemu2 
E1204 12:52:20.540372    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2332252860 start -p running-upgrade-728000 --memory=2200 --vm-driver=qemu2 : (59.744094833s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-728000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-728000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.794695042s)

-- stdout --
	* [running-upgrade-728000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-728000" primary control-plane node in "running-upgrade-728000" cluster
	* Updating the running qemu2 "running-upgrade-728000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1204 12:52:35.008913    5191 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:52:35.009586    5191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:52:35.009591    5191 out.go:358] Setting ErrFile to fd 2...
	I1204 12:52:35.009594    5191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:52:35.009763    5191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:52:35.011261    5191 out.go:352] Setting JSON to false
	I1204 12:52:35.030764    5191 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4926,"bootTime":1733340629,"procs":582,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:52:35.030854    5191 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:52:35.036188    5191 out.go:177] * [running-upgrade-728000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:52:35.044138    5191 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:52:35.044184    5191 notify.go:220] Checking for updates...
	I1204 12:52:35.051946    5191 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:52:35.056112    5191 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:52:35.060113    5191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:52:35.063127    5191 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:52:35.066118    5191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:52:35.069370    5191 config.go:182] Loaded profile config "running-upgrade-728000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:52:35.071042    5191 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 12:52:35.074066    5191 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:52:35.078130    5191 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:52:35.083131    5191 start.go:297] selected driver: qemu2
	I1204 12:52:35.083139    5191 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-728000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63639 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:52:35.083196    5191 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:52:35.085858    5191 cni.go:84] Creating CNI manager for ""
	I1204 12:52:35.085884    5191 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:52:35.085907    5191 start.go:340] cluster config:
	{Name:running-upgrade-728000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63639 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:52:35.085954    5191 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:52:35.094065    5191 out.go:177] * Starting "running-upgrade-728000" primary control-plane node in "running-upgrade-728000" cluster
	I1204 12:52:35.098104    5191 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 12:52:35.098124    5191 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1204 12:52:35.098133    5191 cache.go:56] Caching tarball of preloaded images
	I1204 12:52:35.098187    5191 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:52:35.098193    5191 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1204 12:52:35.098245    5191 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/config.json ...
	I1204 12:52:35.098754    5191 start.go:360] acquireMachinesLock for running-upgrade-728000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:52:35.098782    5191 start.go:364] duration metric: took 22.583µs to acquireMachinesLock for "running-upgrade-728000"
	I1204 12:52:35.098791    5191 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:52:35.098794    5191 fix.go:54] fixHost starting: 
	I1204 12:52:35.099368    5191 fix.go:112] recreateIfNeeded on running-upgrade-728000: state=Running err=<nil>
	W1204 12:52:35.099378    5191 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:52:35.107097    5191 out.go:177] * Updating the running qemu2 "running-upgrade-728000" VM ...
	I1204 12:52:35.110919    5191 machine.go:93] provisionDockerMachine start ...
	I1204 12:52:35.110959    5191 main.go:141] libmachine: Using SSH client type: native
	I1204 12:52:35.111061    5191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008bafc0] 0x1008bd800 <nil>  [] 0s} localhost 63607 <nil> <nil>}
	I1204 12:52:35.111066    5191 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 12:52:35.171472    5191 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-728000
	
	I1204 12:52:35.171487    5191 buildroot.go:166] provisioning hostname "running-upgrade-728000"
	I1204 12:52:35.171543    5191 main.go:141] libmachine: Using SSH client type: native
	I1204 12:52:35.171652    5191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008bafc0] 0x1008bd800 <nil>  [] 0s} localhost 63607 <nil> <nil>}
	I1204 12:52:35.171658    5191 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-728000 && echo "running-upgrade-728000" | sudo tee /etc/hostname
	I1204 12:52:35.235172    5191 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-728000
	
	I1204 12:52:35.235231    5191 main.go:141] libmachine: Using SSH client type: native
	I1204 12:52:35.235338    5191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008bafc0] 0x1008bd800 <nil>  [] 0s} localhost 63607 <nil> <nil>}
	I1204 12:52:35.235346    5191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-728000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-728000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-728000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 12:52:35.297390    5191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 12:52:35.297399    5191 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19985-1334/.minikube CaCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19985-1334/.minikube}
	I1204 12:52:35.297409    5191 buildroot.go:174] setting up certificates
	I1204 12:52:35.297414    5191 provision.go:84] configureAuth start
	I1204 12:52:35.297419    5191 provision.go:143] copyHostCerts
	I1204 12:52:35.297477    5191 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem, removing ...
	I1204 12:52:35.297483    5191 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem
	I1204 12:52:35.297620    5191 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem (1082 bytes)
	I1204 12:52:35.297811    5191 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem, removing ...
	I1204 12:52:35.297815    5191 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem
	I1204 12:52:35.297856    5191 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem (1123 bytes)
	I1204 12:52:35.297966    5191 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem, removing ...
	I1204 12:52:35.297969    5191 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem
	I1204 12:52:35.298007    5191 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem (1679 bytes)
	I1204 12:52:35.298116    5191 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-728000 san=[127.0.0.1 localhost minikube running-upgrade-728000]
	I1204 12:52:35.334417    5191 provision.go:177] copyRemoteCerts
	I1204 12:52:35.334473    5191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 12:52:35.334481    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	I1204 12:52:35.366685    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 12:52:35.374347    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1204 12:52:35.381717    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 12:52:35.388445    5191 provision.go:87] duration metric: took 91.022125ms to configureAuth
	I1204 12:52:35.388454    5191 buildroot.go:189] setting minikube options for container-runtime
	I1204 12:52:35.388569    5191 config.go:182] Loaded profile config "running-upgrade-728000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:52:35.388611    5191 main.go:141] libmachine: Using SSH client type: native
	I1204 12:52:35.388702    5191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008bafc0] 0x1008bd800 <nil>  [] 0s} localhost 63607 <nil> <nil>}
	I1204 12:52:35.388706    5191 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 12:52:35.451314    5191 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 12:52:35.451323    5191 buildroot.go:70] root file system type: tmpfs
	I1204 12:52:35.451377    5191 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 12:52:35.451436    5191 main.go:141] libmachine: Using SSH client type: native
	I1204 12:52:35.451550    5191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008bafc0] 0x1008bd800 <nil>  [] 0s} localhost 63607 <nil> <nil>}
	I1204 12:52:35.451584    5191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 12:52:35.514424    5191 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 12:52:35.514475    5191 main.go:141] libmachine: Using SSH client type: native
	I1204 12:52:35.514573    5191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008bafc0] 0x1008bd800 <nil>  [] 0s} localhost 63607 <nil> <nil>}
	I1204 12:52:35.514581    5191 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 12:52:35.577551    5191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 12:52:35.577563    5191 machine.go:96] duration metric: took 466.632917ms to provisionDockerMachine
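
As the inline comments in the unit above explain, the bare "ExecStart=" line resets the start command inherited from the base configuration so that systemd sees exactly one ExecStart= setting. A hypothetical follow-up check on the guest, not run by the test:

	# Show the effective unit, then lint the file for problems such as a duplicate ExecStart=.
	systemctl cat docker.service
	systemd-analyze verify /lib/systemd/system/docker.service
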
	I1204 12:52:35.577570    5191 start.go:293] postStartSetup for "running-upgrade-728000" (driver="qemu2")
	I1204 12:52:35.577576    5191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 12:52:35.577635    5191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 12:52:35.577644    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	I1204 12:52:35.615260    5191 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 12:52:35.617161    5191 info.go:137] Remote host: Buildroot 2021.02.12
	I1204 12:52:35.617185    5191 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/addons for local assets ...
	I1204 12:52:35.617258    5191 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/files for local assets ...
	I1204 12:52:35.617350    5191 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem -> 18562.pem in /etc/ssl/certs
	I1204 12:52:35.617475    5191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 12:52:35.621281    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:52:35.628530    5191 start.go:296] duration metric: took 50.954375ms for postStartSetup
	I1204 12:52:35.628543    5191 fix.go:56] duration metric: took 529.7415ms for fixHost
	I1204 12:52:35.628587    5191 main.go:141] libmachine: Using SSH client type: native
	I1204 12:52:35.628694    5191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1008bafc0] 0x1008bd800 <nil>  [] 0s} localhost 63607 <nil> <nil>}
	I1204 12:52:35.628699    5191 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 12:52:35.687939    5191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733345555.556155805
	
	I1204 12:52:35.687946    5191 fix.go:216] guest clock: 1733345555.556155805
	I1204 12:52:35.687951    5191 fix.go:229] Guest: 2024-12-04 12:52:35.556155805 -0800 PST Remote: 2024-12-04 12:52:35.628545 -0800 PST m=+0.641338084 (delta=-72.389195ms)
	I1204 12:52:35.687961    5191 fix.go:200] guest clock delta is within tolerance: -72.389195ms
	I1204 12:52:35.687964    5191 start.go:83] releasing machines lock for "running-upgrade-728000", held for 589.170625ms
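
The fix.go lines above read the guest clock with "date +%s.%N" and accept the roughly -72ms delta against the host. A rough host-side sketch of the same skew check, where "guest" stands in for an SSH target (minikube's real check keeps nanosecond precision):

	# Whole-second host-to-guest clock skew; illustrative only.
	echo $(( $(date +%s) - $(ssh guest date +%s) ))
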
	I1204 12:52:35.688039    5191 ssh_runner.go:195] Run: cat /version.json
	I1204 12:52:35.688047    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	I1204 12:52:35.688722    5191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 12:52:35.688743    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	W1204 12:52:35.719277    5191 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1204 12:52:35.719332    5191 ssh_runner.go:195] Run: systemctl --version
	I1204 12:52:35.764747    5191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 12:52:35.766651    5191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 12:52:35.766695    5191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1204 12:52:35.769935    5191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1204 12:52:35.774250    5191 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 12:52:35.774265    5191 start.go:495] detecting cgroup driver to use...
	I1204 12:52:35.774336    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:52:35.779630    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1204 12:52:35.782986    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 12:52:35.786631    5191 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 12:52:35.786659    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 12:52:35.790062    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:52:35.793342    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 12:52:35.796200    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:52:35.799214    5191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 12:52:35.802220    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 12:52:35.805661    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 12:52:35.808611    5191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 12:52:35.811509    5191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 12:52:35.814748    5191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 12:52:35.817965    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:52:35.908202    5191 ssh_runner.go:195] Run: sudo systemctl restart containerd
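
Taken together, the sed edits above pin containerd to the cgroupfs driver, the io.containerd.runc.v2 shim, registry.k8s.io/pause:3.7 as the sandbox image, and enabled unprivileged ports. A hypothetical spot-check of the rewritten file (not captured in this run; key names per containerd 1.x):

	sudo grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|io.containerd.runc.v2' /etc/containerd/config.toml
	# Expected per the edits above: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.7",
	# enable_unprivileged_ports = true, and runc runtimes mapped to io.containerd.runc.v2.
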
	I1204 12:52:35.917264    5191 start.go:495] detecting cgroup driver to use...
	I1204 12:52:35.917343    5191 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 12:52:35.925729    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:52:35.930562    5191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 12:52:35.941131    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:52:35.945883    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 12:52:35.950449    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:52:35.956849    5191 ssh_runner.go:195] Run: which cri-dockerd
	I1204 12:52:35.958144    5191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 12:52:35.960763    5191 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1204 12:52:35.965419    5191 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 12:52:36.057079    5191 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 12:52:36.149241    5191 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 12:52:36.149305    5191 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 12:52:36.154679    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:52:36.240221    5191 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 12:52:38.936095    5191 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.695822917s)
	I1204 12:52:38.936174    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 12:52:38.941093    5191 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 12:52:38.948137    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:52:38.952747    5191 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 12:52:39.039052    5191 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 12:52:39.108401    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:52:39.194188    5191 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 12:52:39.200424    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:52:39.205119    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:52:39.271750    5191 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 12:52:39.311921    5191 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 12:52:39.312027    5191 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 12:52:39.314153    5191 start.go:563] Will wait 60s for crictl version
	I1204 12:52:39.314201    5191 ssh_runner.go:195] Run: which crictl
	I1204 12:52:39.315518    5191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 12:52:39.327279    5191 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
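
A hypothetical manual equivalent of the probe above, pointing crictl at the cri-dockerd endpoint written to /etc/crictl.yaml a few steps earlier:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
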
	I1204 12:52:39.327367    5191 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 12:52:39.341085    5191 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 12:52:39.363408    5191 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1204 12:52:39.363570    5191 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1204 12:52:39.365064    5191 kubeadm.go:883] updating cluster {Name:running-upgrade-728000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63639 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1204 12:52:39.365110    5191 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 12:52:39.365157    5191 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:52:39.375607    5191 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 12:52:39.375616    5191 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
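
The tag mismatch is why the check above reports the image as not preloaded: the v1.26-era tarball ships k8s.gcr.io/* tags, while this minikube looks for registry.k8s.io/*. A sketch of reconciling one image by hand (the test instead falls back to loading from its image cache below):

	# Illustrative only; not something this run executes.
	docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1
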
	I1204 12:52:39.375670    5191 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 12:52:39.379012    5191 ssh_runner.go:195] Run: which lz4
	I1204 12:52:39.380301    5191 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 12:52:39.381544    5191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 12:52:39.381554    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1204 12:52:40.298739    5191 docker.go:653] duration metric: took 918.470583ms to copy over tarball
	I1204 12:52:40.298807    5191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 12:52:41.444289    5191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.145454375s)
	I1204 12:52:41.444302    5191 ssh_runner.go:146] rm: /preloaded.tar.lz4
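
The stat/scp/tar sequence above copies the ~359 MB preload tarball only when it is missing, then unpacks it with --xattrs --xattrs-include security.capability so file capabilities on the Kubernetes binaries survive extraction. A condensed sketch of the same guard-and-extract pattern, where "guest" and $PRELOAD are placeholders:

	ssh guest 'test -f /preloaded.tar.lz4' || scp "$PRELOAD" guest:/preloaded.tar.lz4
	ssh guest 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'
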
	I1204 12:52:41.460042    5191 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 12:52:41.463383    5191 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1204 12:52:41.468570    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:52:41.552180    5191 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 12:52:42.926225    5191 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.37401325s)
	I1204 12:52:42.926340    5191 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:52:42.937645    5191 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 12:52:42.937662    5191 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 12:52:42.937669    5191 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 12:52:42.942209    5191 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:52:42.945528    5191 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:52:42.947751    5191 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:52:42.947912    5191 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:52:42.949768    5191 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:52:42.949813    5191 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:52:42.951344    5191 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:52:42.951426    5191 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:52:42.953084    5191 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:52:42.953124    5191 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 12:52:42.954072    5191 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:52:42.954395    5191 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:52:42.955484    5191 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1204 12:52:42.955542    5191 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:52:42.956686    5191 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:52:42.957428    5191 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:52:43.440573    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:52:43.449517    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:52:43.457440    5191 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1204 12:52:43.457465    5191 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:52:43.457530    5191 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:52:43.463674    5191 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1204 12:52:43.463702    5191 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:52:43.463749    5191 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:52:43.473507    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1204 12:52:43.475037    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1204 12:52:43.484031    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:52:43.496060    5191 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1204 12:52:43.496089    5191 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:52:43.496149    5191 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:52:43.506596    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1204 12:52:43.574872    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:52:43.576068    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1204 12:52:43.589537    5191 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1204 12:52:43.589564    5191 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:52:43.589639    5191 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:52:43.589844    5191 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1204 12:52:43.589853    5191 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1204 12:52:43.589880    5191 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1204 12:52:43.602631    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1204 12:52:43.602889    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1204 12:52:43.603024    5191 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1204 12:52:43.605695    5191 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1204 12:52:43.605705    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1204 12:52:43.614278    5191 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1204 12:52:43.614286    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1204 12:52:43.641533    5191 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1204 12:52:43.657738    5191 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 12:52:43.657920    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:52:43.668000    5191 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1204 12:52:43.668019    5191 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:52:43.668084    5191 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:52:43.677926    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 12:52:43.678065    5191 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1204 12:52:43.679586    5191 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1204 12:52:43.679597    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1204 12:52:43.722578    5191 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1204 12:52:43.722593    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1204 12:52:43.761235    5191 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1204 12:52:43.783593    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1204 12:52:43.796254    5191 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1204 12:52:43.796280    5191 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:52:43.796354    5191 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1204 12:52:43.805854    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1204 12:52:43.806010    5191 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1204 12:52:43.807591    5191 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1204 12:52:43.807601    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W1204 12:52:43.843758    5191 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 12:52:43.843877    5191 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:52:43.873449    5191 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1204 12:52:43.873473    5191 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:52:43.873537    5191 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:52:43.913034    5191 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 12:52:43.913189    5191 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 12:52:43.924067    5191 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 12:52:43.924101    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1204 12:52:44.001741    5191 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 12:52:44.001755    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1204 12:52:44.346065    5191 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 12:52:44.346090    5191 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1204 12:52:44.346098    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1204 12:52:44.473197    5191 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1204 12:52:44.473240    5191 cache_images.go:92] duration metric: took 1.5355465s to LoadCachedImages
	W1204 12:52:44.473297    5191 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
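
The warning above (emitted once to the log and once to the console) means the per-arch cache never contained the four v1.24.1 control-plane images, so only pause, coredns, etcd, and storage-provisioner were transferred. A hypothetical check of what the cache actually holds:

	ls /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/
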
	I1204 12:52:44.473302    5191 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1204 12:52:44.473360    5191 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-728000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 12:52:44.473447    5191 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 12:52:44.487674    5191 cni.go:84] Creating CNI manager for ""
	I1204 12:52:44.487687    5191 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:52:44.487700    5191 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 12:52:44.487714    5191 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-728000 NodeName:running-upgrade-728000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 12:52:44.487790    5191 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-728000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
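	The rendered config above is a single file carrying four YAML documents: InitConfiguration (node registration and bootstrap tokens), ClusterConfiguration (apiserver, controller-manager, scheduler, etcd), KubeletConfiguration, and KubeProxyConfiguration. As a minimal sketch (not minikube code; only the file path is taken from the log), splitting on the document separator and scanning for `kind:` lists the components it drives:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path from the log; the file holds four "---"-separated YAML documents.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
```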
	I1204 12:52:44.487858    5191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1204 12:52:44.491068    5191 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 12:52:44.491106    5191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 12:52:44.494390    5191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1204 12:52:44.499519    5191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 12:52:44.504534    5191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1204 12:52:44.509622    5191 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1204 12:52:44.511025    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:52:44.579830    5191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 12:52:44.592214    5191 certs.go:68] Setting up /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000 for IP: 10.0.2.15
	I1204 12:52:44.592223    5191 certs.go:194] generating shared ca certs ...
	I1204 12:52:44.592246    5191 certs.go:226] acquiring lock for ca certs: {Name:mk686f72a960a82dacaf4c130e092ac78361d077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:52:44.592412    5191 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key
	I1204 12:52:44.592449    5191 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key
	I1204 12:52:44.592455    5191 certs.go:256] generating profile certs ...
	I1204 12:52:44.592513    5191 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/client.key
	I1204 12:52:44.592527    5191 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.key.4da3d720
	I1204 12:52:44.592539    5191 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.crt.4da3d720 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1204 12:52:44.916332    5191 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.crt.4da3d720 ...
	I1204 12:52:44.916352    5191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.crt.4da3d720: {Name:mk7f55b1b8a74fcdda614aea3a6e21511796ccff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:52:44.916675    5191 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.key.4da3d720 ...
	I1204 12:52:44.916683    5191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.key.4da3d720: {Name:mkfd51804adedc6bd71dab538a0b2422cd7219c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:52:44.916855    5191 certs.go:381] copying /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.crt.4da3d720 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.crt
	I1204 12:52:44.916980    5191 certs.go:385] copying /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.key.4da3d720 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.key
	I1204 12:52:44.917113    5191 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/proxy-client.key
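	The apiserver serving cert generated above carries four IP SANs: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster `kubernetes` service), 127.0.0.1, 10.0.0.1, and 10.0.2.15 (the QEMU user-network node IP). A standard-library sketch producing the same SAN set; it self-signs for brevity, whereas minikube signs with its profile CA:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Println(err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~3 years, cf. CertExpiration:26280h0m0s in the log
		// The four IP SANs from the crypto.go log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed sketch: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```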
	I1204 12:52:44.917248    5191 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem (1338 bytes)
	W1204 12:52:44.917272    5191 certs.go:480] ignoring /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856_empty.pem, impossibly tiny 0 bytes
	I1204 12:52:44.917278    5191 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 12:52:44.917298    5191 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem (1082 bytes)
	I1204 12:52:44.917319    5191 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem (1123 bytes)
	I1204 12:52:44.917336    5191 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem (1679 bytes)
	I1204 12:52:44.917372    5191 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:52:44.917729    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 12:52:44.925514    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 12:52:44.934984    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 12:52:44.945135    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 12:52:44.952088    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 12:52:44.959873    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 12:52:44.967912    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 12:52:44.976053    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 12:52:44.985476    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /usr/share/ca-certificates/18562.pem (1708 bytes)
	I1204 12:52:45.001811    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 12:52:45.008044    5191 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem --> /usr/share/ca-certificates/1856.pem (1338 bytes)
	I1204 12:52:45.015984    5191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 12:52:45.020764    5191 ssh_runner.go:195] Run: openssl version
	I1204 12:52:45.025425    5191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18562.pem && ln -fs /usr/share/ca-certificates/18562.pem /etc/ssl/certs/18562.pem"
	I1204 12:52:45.033763    5191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18562.pem
	I1204 12:52:45.035376    5191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:00 /usr/share/ca-certificates/18562.pem
	I1204 12:52:45.035404    5191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18562.pem
	I1204 12:52:45.037417    5191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18562.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 12:52:45.040131    5191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 12:52:45.043381    5191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:52:45.045026    5191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:52:45.045055    5191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:52:45.046798    5191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 12:52:45.050045    5191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1856.pem && ln -fs /usr/share/ca-certificates/1856.pem /etc/ssl/certs/1856.pem"
	I1204 12:52:45.053037    5191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1856.pem
	I1204 12:52:45.054509    5191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:00 /usr/share/ca-certificates/1856.pem
	I1204 12:52:45.054535    5191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1856.pem
	I1204 12:52:45.056521    5191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1856.pem /etc/ssl/certs/51391683.0"
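	Each `openssl x509 -hash` / `ln -fs` pair above installs a CA into the system trust store: OpenSSL looks certificates up by subject-hash filenames such as `b5213941.0`, so the PEM must be symlinked under that name in /etc/ssl/certs. A sketch of the same two steps via os/exec (it shells out just as the log does, and needs root for the symlink, like the sudo shell above):

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the pattern in the log: compute the OpenSSL subject hash
// of a PEM certificate, then symlink the PEM into /etc/ssl/certs as <hash>.0.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs <pem> <link>`.
	return exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
```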
	I1204 12:52:45.059289    5191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 12:52:45.061223    5191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 12:52:45.063086    5191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 12:52:45.065078    5191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 12:52:45.066853    5191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 12:52:45.068786    5191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 12:52:45.070509    5191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
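	The `-checkend 86400` probes above ask whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; a non-zero exit would force regeneration. The equivalent check written against the standard library, sketched for one of the paths from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Mirrors `openssl x509 -checkend 86400`: flag certs that expire
	// within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate ok")
	}
}
```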
	I1204 12:52:45.072480    5191 kubeadm.go:392] StartCluster: {Name:running-upgrade-728000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63639 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:52:45.072552    5191 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:52:45.083019    5191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 12:52:45.087780    5191 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 12:52:45.087790    5191 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 12:52:45.087823    5191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 12:52:45.091289    5191 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:52:45.091521    5191 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-728000" does not appear in /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:52:45.091571    5191 kubeconfig.go:62] /Users/jenkins/minikube-integration/19985-1334/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-728000" cluster setting kubeconfig missing "running-upgrade-728000" context setting]
	I1204 12:52:45.091694    5191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:52:45.092119    5191 kapi.go:59] client config for running-upgrade-728000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/client.key", CAFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102317740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 12:52:45.092437    5191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 12:52:45.095411    5191 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-728000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
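	The drift shown in the diff is two-fold: the CRI socket gained the `unix://` URI scheme that newer cri-dockerd expects, and the kubelet cgroup driver moved from systemd to cgroupfs (with hairpinMode and runtimeRequestTimeout added alongside). `diff -u` exits 0 when the files match and 1 when they differ, and that exit status is what flips minikube into the reconfigure path. A sketch of the decision, not minikube's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Compare the on-disk kubeadm config with the freshly generated one,
	// the same pair of paths the log diffs above.
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift; keep the running control plane")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// Exit status 1 = files differ: reconfigure from the new config.
		fmt.Printf("config drift detected, reconfiguring:\n%s", out)
	default:
		fmt.Println("diff failed:", err)
	}
}
```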
	I1204 12:52:45.095417    5191 kubeadm.go:1160] stopping kube-system containers ...
	I1204 12:52:45.095471    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:52:45.106590    5191 docker.go:483] Stopping containers: [f670be475b38 b36b7fe8756c fff56529fa39 cc374ed7e636 de104476bdec 499812ae8462 c87f5d60400f 06d6fc8f8465 b1a2413c32e7 160c5264bc4d b0296b32762c 340833b0ace9 8bb1b60cb084 c0bc176163d8 cec8da9b20f7]
	I1204 12:52:45.106662    5191 ssh_runner.go:195] Run: docker stop f670be475b38 b36b7fe8756c fff56529fa39 cc374ed7e636 de104476bdec 499812ae8462 c87f5d60400f 06d6fc8f8465 b1a2413c32e7 160c5264bc4d b0296b32762c 340833b0ace9 8bb1b60cb084 c0bc176163d8 cec8da9b20f7
	I1204 12:52:45.835484    5191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 12:52:45.929373    5191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 12:52:45.933276    5191 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Dec  4 20:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Dec  4 20:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  4 20:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Dec  4 20:52 /etc/kubernetes/scheduler.conf
	
	I1204 12:52:45.933320    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/admin.conf
	I1204 12:52:45.936092    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:52:45.936130    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 12:52:45.939033    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/kubelet.conf
	I1204 12:52:45.941781    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:52:45.941817    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 12:52:45.945152    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/controller-manager.conf
	I1204 12:52:45.947851    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:52:45.947881    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 12:52:45.950662    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/scheduler.conf
	I1204 12:52:45.954708    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:52:45.954766    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 12:52:45.961430    5191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 12:52:45.964790    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:52:45.985872    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:52:46.451388    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:52:46.647208    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:52:46.671387    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
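	Rather than running a full `kubeadm init`, the restart path above replays five individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file. A sketch of that sequence; it assumes a `kubeadm` binary on PATH, whereas the log prepends the versioned binary directory /var/lib/minikube/binaries/v1.24.1:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same five phases the log runs, in order, sharing one config file.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control plane reconfigured")
}
```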
	I1204 12:52:46.690542    5191 api_server.go:52] waiting for apiserver process to appear ...
	I1204 12:52:46.690631    5191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:52:47.192815    5191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:52:47.692708    5191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:52:47.697252    5191 api_server.go:72] duration metric: took 1.006700042s to wait for apiserver process to appear ...
	I1204 12:52:47.697260    5191 api_server.go:88] waiting for apiserver healthz status ...
	I1204 12:52:47.697277    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:52:52.699436    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:52:52.699473    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:52:57.699863    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:52:57.699952    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:02.700871    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:02.700938    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:07.701762    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:07.701852    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:12.703340    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:12.703420    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:17.705343    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:17.705422    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:22.707588    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:22.707675    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:27.710439    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:27.710536    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:32.713373    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:32.713469    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:37.714288    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:37.714371    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:42.717199    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:42.717284    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:47.720041    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
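	Each `Checking apiserver healthz` / `stopped` pair above is one probe of https://10.0.2.15:8443/healthz with a roughly five-second per-request timeout; in this run every probe times out, so minikube interleaves log gathering between rounds until its overall wait expires. A minimal sketch of such a polling loop (certificate verification is skipped here for brevity; minikube verifies against its cluster CA, and the 4-minute deadline is an assumption for the sketch):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// Sketch only: skip verification; minikube pins its cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second) // back off briefly between probes
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```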
	I1204 12:53:47.720624    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:53:47.758708    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:53:47.758870    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:53:47.781458    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:53:47.781582    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:53:47.797901    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:53:47.797993    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:53:47.810510    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:53:47.810599    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:53:47.821141    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:53:47.821211    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:53:47.832145    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:53:47.832226    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:53:47.842493    5191 logs.go:282] 0 containers: []
	W1204 12:53:47.842507    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:53:47.842586    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:53:47.853220    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:53:47.853250    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:53:47.853256    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:53:47.876338    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:53:47.876352    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:53:47.890608    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:53:47.890618    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:53:47.905112    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:53:47.905122    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:53:47.917195    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:53:47.917206    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:53:47.943865    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:53:47.943874    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:53:47.982016    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:53:47.982026    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:53:48.052605    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:53:48.052616    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:53:48.071396    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:53:48.071407    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:53:48.085823    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:53:48.085835    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:53:48.090377    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:53:48.090385    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:53:48.103145    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:53:48.103156    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:53:48.114825    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:53:48.114837    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:53:48.126184    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:53:48.126195    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:53:48.142176    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:53:48.142189    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:53:48.159769    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:53:48.159783    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:53:50.673678    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:53:55.676234    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:53:55.676711    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:53:55.710888    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:53:55.711036    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:53:55.729076    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:53:55.729186    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:53:55.743768    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:53:55.743844    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:53:55.755523    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:53:55.755605    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:53:55.766523    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:53:55.766593    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:53:55.780580    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:53:55.780657    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:53:55.790579    5191 logs.go:282] 0 containers: []
	W1204 12:53:55.790591    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:53:55.790654    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:53:55.801129    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:53:55.801145    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:53:55.801150    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:53:55.841586    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:53:55.841597    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:53:55.855791    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:53:55.855800    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:53:55.869499    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:53:55.869509    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:53:55.886522    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:53:55.886531    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:53:55.898358    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:53:55.898372    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:53:55.903164    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:53:55.903172    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:53:55.929697    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:53:55.929706    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:53:55.943973    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:53:55.943985    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:53:55.961846    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:53:55.961860    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:53:55.972926    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:53:55.972938    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:53:55.984582    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:53:55.984595    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:53:56.020530    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:53:56.020536    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:53:56.040604    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:53:56.040613    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:53:56.051902    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:53:56.051916    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:53:56.063372    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:53:56.063384    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:53:58.579736    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:03.582453    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:03.582731    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:03.604798    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:03.604955    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:03.620011    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:03.620099    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:03.637671    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:03.637736    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:03.648324    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:03.648393    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:03.659003    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:03.659082    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:03.670012    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:03.670099    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:03.680184    5191 logs.go:282] 0 containers: []
	W1204 12:54:03.680193    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:03.680259    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:03.693839    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:03.694114    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:03.694178    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:03.706936    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:03.706952    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:03.724532    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:03.724546    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:03.735422    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:03.735433    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:03.749874    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:03.749886    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:03.788065    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:03.788073    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:03.792557    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:03.792563    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:54:03.809523    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:03.809536    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:03.836120    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:03.836127    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:03.847731    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:03.847746    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:03.861737    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:03.861746    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:03.874200    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:03.874209    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:03.910984    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:03.910993    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:03.925429    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:03.925444    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:54:03.940554    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:03.940564    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:03.955162    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:03.955173    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:06.471624    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:11.472667    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:11.472910    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:11.495946    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:11.496056    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:11.510400    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:11.510485    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:11.530830    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:11.530914    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:11.541452    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:11.541532    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:11.551446    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:11.551522    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:11.562313    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:11.562389    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:11.576468    5191 logs.go:282] 0 containers: []
	W1204 12:54:11.576484    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:11.576543    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:11.588841    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:11.588858    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:11.588863    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:11.600752    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:11.600764    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:54:11.613924    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:11.613934    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:11.651239    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:11.651254    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:11.665287    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:11.665298    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:11.676752    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:11.676763    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:11.688531    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:11.688544    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:11.692943    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:11.692950    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:11.706805    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:11.706819    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:11.724599    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:11.724609    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:11.739987    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:11.740001    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:11.775727    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:11.775736    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:54:11.792724    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:11.792737    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:11.804204    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:11.804217    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:11.829748    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:11.829757    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:11.842081    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:11.842089    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:14.356296    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:19.357760    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:19.358252    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:19.401584    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:19.401736    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:19.422360    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:19.422470    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:19.439594    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:19.439669    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:19.451545    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:19.451617    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:19.462398    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:19.462469    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:19.477455    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:19.477534    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:19.488786    5191 logs.go:282] 0 containers: []
	W1204 12:54:19.488802    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:19.488867    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:19.499328    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:19.499345    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:19.499351    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:19.513550    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:19.513562    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:19.525342    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:19.525355    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:19.546129    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:19.546138    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:19.559914    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:19.559927    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:19.571095    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:19.571104    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:19.585256    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:19.585265    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:19.589767    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:19.589776    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:54:19.606850    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:19.606861    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:19.618373    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:19.618385    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:19.654472    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:19.654479    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:19.666869    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:19.666878    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:19.692335    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:19.692349    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:19.725878    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:19.725891    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:19.744495    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:19.744505    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:54:19.755992    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:19.756005    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:22.270069    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:27.272922    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:27.273534    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:27.315411    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:27.315571    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:27.341225    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:27.341345    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:27.356317    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:27.356406    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:27.368593    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:27.368676    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:27.379849    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:27.379930    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:27.390582    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:27.390654    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:27.401425    5191 logs.go:282] 0 containers: []
	W1204 12:54:27.401435    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:27.401511    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:27.412507    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:27.412526    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:27.412532    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:27.424406    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:27.424418    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:27.436954    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:27.436968    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:27.449137    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:27.449151    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:27.486819    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:27.486829    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:27.501369    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:27.501381    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:27.519366    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:27.519376    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:27.523800    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:27.523808    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:27.542259    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:27.542272    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:27.560072    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:27.560081    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:27.575078    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:27.575091    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:54:27.587028    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:27.587042    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:54:27.607944    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:27.607953    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:27.622272    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:27.622285    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:27.658599    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:27.658607    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:27.682935    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:27.682942    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:30.197239    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:35.200088    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:35.200657    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:35.239343    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:35.239496    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:35.261826    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:35.261947    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:35.277574    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:35.277653    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:35.289666    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:35.289758    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:35.301025    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:35.301101    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:35.312102    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:35.312180    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:35.323038    5191 logs.go:282] 0 containers: []
	W1204 12:54:35.323050    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:35.323118    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:35.333984    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:35.334001    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:35.334007    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:35.345864    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:35.345877    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:35.360054    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:35.360067    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:35.374113    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:35.374126    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:35.385652    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:35.385663    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:54:35.402766    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:35.402777    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:35.414625    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:35.414639    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:35.451804    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:35.451818    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:35.486539    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:35.486552    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:35.511446    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:35.511459    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:35.526112    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:35.526124    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:35.531092    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:35.531102    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:54:35.543220    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:35.543232    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:35.568455    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:35.568466    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:35.580100    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:35.580111    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:35.593818    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:35.593828    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:38.107164    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:43.110040    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:43.110587    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:43.151207    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:43.151359    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:43.175854    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:43.175985    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:43.190852    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:43.190936    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:43.215312    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:43.215379    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:43.226291    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:43.226365    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:43.238252    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:43.238325    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:43.249911    5191 logs.go:282] 0 containers: []
	W1204 12:54:43.249922    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:43.249976    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:43.261950    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:43.261982    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:43.261993    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:43.267976    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:43.267986    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:43.286730    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:43.286739    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:54:43.298647    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:43.298658    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:43.310930    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:43.310941    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:43.335534    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:43.335552    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:43.372153    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:43.372166    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:43.386370    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:43.386381    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:43.398430    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:43.398443    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:43.411915    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:43.411926    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:43.426539    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:43.426549    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:43.445396    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:43.445408    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:43.457625    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:43.457635    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:43.469866    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:43.469878    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:43.505656    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:43.505666    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:43.520535    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:43.520546    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
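	(Each failed probe is followed by the same container census: one docker ps -a per component, filtered on the kubelet's k8s_<name> container-name prefix and reduced to bare IDs, which is why "kindnet" consistently reports 0 containers on this cluster. A sketch of that enumeration, under the assumption that Docker is reachable on the node and using the component list the log cycles through:)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainers mirrors one `docker ps -a --filter=name=k8s_<c> --format={{.ID}}`
	    // call from the log: all containers, running or exited, whose name carries
	    // the k8s_ prefix for the given component, returned as bare IDs.
	    func listContainers(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "storage-provisioner",
	        } {
	            ids, err := listContainers(c)
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	        }
	    }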
	I1204 12:54:46.040319    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:51.043125    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:51.043386    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:51.068703    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:51.068833    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:51.085527    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:51.085621    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:51.101739    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:51.101816    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:51.113807    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:51.113886    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:51.124882    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:51.124960    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:51.135399    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:51.135472    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:51.149532    5191 logs.go:282] 0 containers: []
	W1204 12:54:51.149543    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:51.149610    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:51.160019    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:51.160037    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:51.160043    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:51.196500    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:51.196510    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:51.210909    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:51.210922    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:51.223043    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:51.223054    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:54:51.234821    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:51.234835    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:54:51.252241    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:51.252254    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:51.276010    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:51.276016    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:51.312325    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:51.312337    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:51.327197    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:51.327209    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:51.339348    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:51.339358    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:51.343938    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:51.343945    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:51.356140    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:51.356150    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:51.368126    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:51.368136    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:51.382187    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:51.382195    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:51.399681    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:51.399692    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:51.413745    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:51.413754    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:53.927734    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:54:58.929638    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:54:58.930209    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:54:58.971670    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:54:58.971813    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:54:58.994163    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:54:58.994268    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:54:59.007642    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:54:59.007721    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:54:59.019005    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:54:59.019089    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:54:59.031969    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:54:59.032047    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:54:59.042460    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:54:59.042527    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:54:59.053788    5191 logs.go:282] 0 containers: []
	W1204 12:54:59.053804    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:54:59.053869    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:54:59.063971    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:54:59.063987    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:54:59.063996    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:54:59.080995    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:54:59.081005    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:54:59.103560    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:54:59.103572    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:54:59.117568    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:54:59.117581    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:54:59.128596    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:54:59.128607    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:54:59.142753    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:54:59.142765    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:54:59.178687    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:54:59.178698    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:54:59.191278    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:54:59.191289    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:54:59.205383    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:54:59.205395    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:54:59.220086    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:54:59.220096    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:54:59.231494    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:54:59.231504    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:54:59.256810    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:54:59.256817    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:54:59.268124    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:54:59.268134    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:54:59.306625    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:54:59.306636    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:54:59.310846    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:54:59.310855    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:54:59.327862    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:54:59.327872    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:01.840090    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:06.840683    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:06.840898    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:06.864904    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:06.865011    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:06.880287    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:06.880377    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:06.893594    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:06.893670    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:06.904822    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:06.904897    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:06.915657    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:06.915724    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:06.925923    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:06.926001    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:06.935315    5191 logs.go:282] 0 containers: []
	W1204 12:55:06.935331    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:06.935399    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:06.945900    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:06.945918    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:06.945924    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:06.960239    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:06.960253    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:06.971320    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:06.971331    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:06.982783    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:06.982794    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:07.002119    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:07.002129    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:07.039833    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:07.039844    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:07.073935    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:07.073946    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:07.091990    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:07.092006    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:07.103407    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:07.103420    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:07.131556    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:07.131568    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:07.143748    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:07.143761    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:07.158831    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:07.158843    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:07.184162    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:07.184170    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:07.197019    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:07.197032    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:07.201939    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:07.201946    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:07.225542    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:07.225554    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:09.745540    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:14.748241    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:14.748333    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:14.763029    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:14.763107    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:14.774579    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:14.774662    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:14.785273    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:14.785340    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:14.795969    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:14.796051    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:14.806628    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:14.806702    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:14.817233    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:14.817310    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:14.827981    5191 logs.go:282] 0 containers: []
	W1204 12:55:14.827998    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:14.828064    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:14.838973    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:14.838995    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:14.839000    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:14.854317    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:14.854328    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:14.868787    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:14.868799    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:14.886636    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:14.886647    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:14.898842    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:14.898855    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:14.913448    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:14.913461    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:14.925269    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:14.925281    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:14.929740    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:14.929749    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:14.965371    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:14.965383    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:14.983013    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:14.983021    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:14.994391    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:14.994401    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:15.019064    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:15.019076    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:15.059847    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:15.059869    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:15.076771    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:15.076787    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:15.090642    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:15.090654    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:15.122990    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:15.123008    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:17.640746    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:22.643130    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:22.643250    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:22.655334    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:22.655418    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:22.666971    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:22.667067    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:22.678056    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:22.678141    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:22.689856    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:22.689942    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:22.700355    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:22.700435    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:22.711265    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:22.711346    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:22.722054    5191 logs.go:282] 0 containers: []
	W1204 12:55:22.722064    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:22.722123    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:22.732692    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:22.732708    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:22.732714    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:22.747246    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:22.747255    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:22.765258    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:22.765268    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:22.781939    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:22.781949    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:22.799063    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:22.799074    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:22.815408    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:22.815418    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:22.830360    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:22.830372    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:22.848093    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:22.848104    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:22.859820    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:22.859833    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:22.871807    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:22.871821    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:22.884084    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:22.884095    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:22.922793    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:22.922809    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:22.935213    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:22.935230    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:22.973098    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:22.973110    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:22.986063    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:22.986075    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:22.990262    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:22.990271    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
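	(The gathering step itself varies only in ordering between cycles: every discovered container gets a `docker logs --tail 400`, alongside the host-side sources shown above, journalctl for kubelet and Docker, dmesg, and kubectl describe nodes. A sketch of one container step, run through bash the way the ssh_runner lines invoke it; the names and IDs below are the ones enumerated in this log and would differ on another run:)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // gather executes one "Gathering logs for <name> [<id>] ..." step:
	    // tail the last 400 lines of a container via bash, capturing stdout
	    // and stderr together as the /bin/bash -c invocations above do.
	    func gather(name, id string) {
	        cmd := fmt.Sprintf("docker logs --tail 400 %s", id)
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        if err != nil {
	            fmt.Printf("%s [%s]: %v\n", name, id, err)
	            return
	        }
	        fmt.Printf("=== %s [%s] ===\n%s", name, id, out)
	    }

	    func main() {
	        gather("kube-apiserver", "952e6b922394")
	        gather("etcd", "2c4624f8f6cb")
	    }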
	I1204 12:55:25.515646    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:30.518469    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:30.518843    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:30.548852    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:30.549003    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:30.567554    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:30.567671    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:30.582685    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:30.582777    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:30.595998    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:30.596077    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:30.607403    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:30.607484    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:30.618955    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:30.619036    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:30.630306    5191 logs.go:282] 0 containers: []
	W1204 12:55:30.630319    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:30.630392    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:30.642435    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:30.642455    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:30.642464    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:30.685242    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:30.685256    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:30.699200    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:30.699214    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:30.715017    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:30.715031    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:30.741990    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:30.742004    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:30.759190    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:30.759204    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:30.799952    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:30.799968    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:30.804639    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:30.804647    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:30.819148    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:30.819164    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:30.837468    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:30.837478    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:30.848885    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:30.848900    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:30.862873    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:30.862886    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:30.885505    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:30.885516    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:30.899915    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:30.899929    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:30.915094    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:30.915106    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:30.928170    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:30.928186    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:33.453390    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:38.455770    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:38.456287    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:38.485245    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:38.485375    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:38.503437    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:38.503518    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:38.517326    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:38.517393    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:38.528911    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:38.528974    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:38.539531    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:38.539596    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:38.550330    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:38.550396    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:38.563833    5191 logs.go:282] 0 containers: []
	W1204 12:55:38.563846    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:38.563906    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:38.575102    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:38.575118    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:38.575123    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:38.592275    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:38.592287    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:38.596843    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:38.596851    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:38.608633    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:38.608647    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:38.620920    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:38.620934    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:38.639294    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:38.639303    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:38.651427    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:38.651441    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:38.665366    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:38.665378    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:38.687146    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:38.687161    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:38.704170    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:38.704180    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:38.740996    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:38.741006    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:38.755984    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:38.755995    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:38.771027    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:38.771036    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:38.795667    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:38.795674    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:38.829904    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:38.829913    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:38.842020    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:38.842031    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:41.358280    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:46.361092    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:46.361312    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:46.382399    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:46.382509    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:46.405951    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:46.406031    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:46.422573    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:46.422658    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:46.442804    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:46.442890    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:46.459340    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:46.459429    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:46.470717    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:46.470813    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:46.486622    5191 logs.go:282] 0 containers: []
	W1204 12:55:46.486635    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:46.486706    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:46.497683    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:46.497705    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:46.497714    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:46.518279    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:46.518292    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:46.542210    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:46.542228    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:46.554583    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:46.554597    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:46.559317    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:46.559326    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:46.605288    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:46.605301    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:46.619169    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:46.619180    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:46.635634    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:46.635646    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:46.652976    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:46.652989    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:46.664635    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:46.664650    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:46.678029    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:46.678042    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:46.690120    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:46.690132    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:46.727773    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:46.727781    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:46.741377    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:46.741387    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:46.752889    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:46.752902    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:46.770063    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:46.770075    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:49.286383    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:54.289320    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:54.289507    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:54.301656    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:54.301751    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:54.312780    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:54.312874    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:54.323561    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:54.323640    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:54.334523    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:54.334609    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:54.345840    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:54.345920    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:54.357325    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:54.357403    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:54.368339    5191 logs.go:282] 0 containers: []
	W1204 12:55:54.368358    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:54.368424    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:54.379410    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:54.379428    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:54.379433    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:54.417622    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:54.417629    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:54.429091    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:54.429103    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:54.449343    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:54.449355    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:54.461637    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:54.461648    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:54.481197    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:54.481208    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:54.501006    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:54.501017    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:54.515595    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:54.515607    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:54.527211    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:54.527220    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:54.564891    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:54.564907    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:54.577081    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:54.577091    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:54.590756    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:54.590766    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:54.616035    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:54.616043    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:54.620482    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:54.620487    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:54.634688    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:54.634700    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:54.649434    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:54.649443    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
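Each failed probe triggers the same evidence-gathering pass seen above: minikube enumerates the control-plane containers by docker name filter, then tails the last 400 lines of each, alongside the kubelet and docker journals, dmesg, and a kubectl describe nodes. A minimal bash sketch of that cycle, assuming it runs inside the guest VM; the k8s_* filters and --tail 400 are copied from the commands in the log:

    # Diagnostic pass: find each component's containers, then tail their logs.
    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                     kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter="name=k8s_${component}" --format='{{.ID}}')
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${component}\"" >&2
        continue
      fi
      for id in $ids; do
        docker logs --tail 400 "$id"
      done
    done
    # Host-level sources collected in the same pass:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400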
	I1204 12:55:57.166078    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:02.168473    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
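The probe at the top of each cycle is a plain HTTPS GET against /healthz with a client timeout of roughly five seconds, which is exactly the gap between every "Checking" and "stopped" pair. A rough curl equivalent, assuming the apiserver's self-signed serving certificate (hence -k):

    # One healthz probe; gives up after 5s when the apiserver is not answering.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz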
	I1204 12:56:02.168825    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:02.196463    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:02.196598    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:02.214040    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:02.214141    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:02.227419    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:02.227529    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:02.239651    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:02.239728    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:02.251490    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:02.251590    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:02.262132    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:02.262216    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:02.272239    5191 logs.go:282] 0 containers: []
	W1204 12:56:02.272258    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:02.272320    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:02.282873    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:02.282889    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:02.282894    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:02.294728    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:02.294738    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:02.305744    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:02.305756    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:02.319375    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:02.319384    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:02.333497    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:02.333509    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:02.345017    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:02.345034    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:02.349393    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:02.349400    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:02.363209    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:02.363225    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:02.384485    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:02.384497    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:02.396379    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:02.396390    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:02.419658    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:02.419669    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:02.437471    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:02.437482    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:02.452306    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:02.452316    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:02.486730    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:02.486744    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:02.504603    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:02.504617    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:02.515918    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:02.515932    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:05.056889    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:10.059650    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:10.059766    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:10.077395    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:10.077529    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:10.089823    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:10.089905    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:10.101897    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:10.101996    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:10.113450    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:10.113534    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:10.125188    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:10.125266    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:10.137077    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:10.137155    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:10.147718    5191 logs.go:282] 0 containers: []
	W1204 12:56:10.147728    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:10.147797    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:10.164840    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:10.164865    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:10.164872    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:10.180136    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:10.180150    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:10.201165    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:10.201189    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:10.246747    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:10.246760    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:10.264827    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:10.264840    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:10.277719    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:10.277733    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:10.291654    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:10.291668    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:10.312101    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:10.312124    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:10.326765    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:10.326778    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:10.342758    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:10.342776    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:10.356387    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:10.356400    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:10.370458    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:10.370472    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:10.410203    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:10.410228    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:10.416223    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:10.416234    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:10.442665    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:10.442686    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:10.459162    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:10.459180    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:12.974313    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:17.976536    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:17.976673    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:17.988960    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:17.989053    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:18.000366    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:18.000453    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:18.012500    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:18.012707    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:18.024183    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:18.024249    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:18.038936    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:18.039003    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:18.049820    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:18.049885    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:18.060375    5191 logs.go:282] 0 containers: []
	W1204 12:56:18.060387    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:18.060447    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:18.071202    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:18.071216    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:18.071221    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:18.085192    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:18.085202    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:18.104105    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:18.104121    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:18.116308    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:18.116318    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:18.128660    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:18.128693    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:18.133716    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:18.133723    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:18.148415    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:18.148424    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:18.163216    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:18.163225    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:18.175807    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:18.175817    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:18.194051    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:18.194063    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:18.209193    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:18.209208    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:18.232074    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:18.232090    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:18.245146    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:18.245158    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:18.283321    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:18.283341    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:18.320142    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:18.320154    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:18.338045    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:18.338056    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:20.852795    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:25.855139    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:25.855287    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:25.867541    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:25.867624    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:25.878443    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:25.878523    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:25.889746    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:25.889825    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:25.901167    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:25.901235    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:25.911960    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:25.912035    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:25.923273    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:25.923344    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:25.945896    5191 logs.go:282] 0 containers: []
	W1204 12:56:25.945911    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:25.945981    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:25.956279    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:25.956298    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:25.956305    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:25.970524    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:25.970538    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:25.981657    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:25.981669    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:25.993378    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:25.993391    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:26.031531    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:26.031541    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:26.048628    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:26.048638    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:26.062840    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:26.062850    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:26.077384    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:26.077395    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:26.088298    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:26.088310    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:26.112522    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:26.112530    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:26.124550    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:26.124584    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:26.142098    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:26.142107    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:26.153724    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:26.153733    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:26.158024    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:26.158031    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:26.192373    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:26.192384    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:26.204987    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:26.204997    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:28.721665    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:33.723920    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:33.724035    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:33.757794    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:33.757879    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:33.776654    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:33.776740    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:33.788609    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:33.788689    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:33.799496    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:33.799575    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:33.810624    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:33.810700    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:33.821394    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:33.821477    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:33.832121    5191 logs.go:282] 0 containers: []
	W1204 12:56:33.832135    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:33.832204    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:33.843143    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:33.843160    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:33.843166    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:33.861809    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:33.861823    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:33.879752    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:33.879766    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:33.914686    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:33.914700    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:33.927116    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:33.927127    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:33.939970    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:33.939983    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:33.962849    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:33.962858    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:33.973796    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:33.973809    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:33.985663    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:33.985676    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:34.022113    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:34.022128    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:34.026494    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:34.026503    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:34.040108    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:34.040121    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:34.052592    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:34.052606    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:34.066614    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:34.066624    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:34.080703    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:34.080716    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:34.095181    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:34.095195    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:36.609300    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:41.611720    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:41.611918    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:41.626028    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:41.626118    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:41.637948    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:41.638021    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:41.653284    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:41.653367    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:41.664102    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:41.664164    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:41.674622    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:41.674686    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:41.685786    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:41.685855    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:41.696749    5191 logs.go:282] 0 containers: []
	W1204 12:56:41.696763    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:41.696830    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:41.707706    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:41.707723    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:41.707729    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:41.723286    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:41.723299    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:41.741335    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:41.741350    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:41.756004    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:41.756017    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:41.767215    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:41.767225    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:41.781612    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:41.781621    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:41.793368    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:41.793381    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:41.805015    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:41.805027    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:41.841859    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:41.841871    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:41.875805    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:41.875821    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:41.888950    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:41.888961    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:41.906577    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:41.906588    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:41.924484    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:41.924497    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:41.939444    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:41.939460    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:41.963736    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:41.963746    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:41.968572    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:41.968580    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:44.486190    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:49.488652    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:49.488699    5191 kubeadm.go:597] duration metric: took 4m4.397901333s to restartPrimaryControlPlane
	W1204 12:56:49.488740    5191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 12:56:49.488759    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 12:56:50.488608    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 12:56:50.494058    5191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 12:56:50.497093    5191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 12:56:50.500025    5191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 12:56:50.500031    5191 kubeadm.go:157] found existing configuration files:
	
	I1204 12:56:50.500061    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/admin.conf
	I1204 12:56:50.502730    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 12:56:50.502766    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 12:56:50.505505    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/kubelet.conf
	I1204 12:56:50.508692    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 12:56:50.508730    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 12:56:50.511859    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/controller-manager.conf
	I1204 12:56:50.514765    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 12:56:50.514798    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 12:56:50.517340    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/scheduler.conf
	I1204 12:56:50.520527    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 12:56:50.520560    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
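The cleanup just above applies one pattern per kubeconfig: grep the file for the expected control-plane endpoint, and delete it when the check fails. Here every file is already missing, so each grep exits with status 2 and the rm is a no-op; kubeadm then regenerates all four. A compact sketch of that loop, with the endpoint copied from the log:

    endpoint="https://control-plane.minikube.internal:63639"
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # grep exits non-zero if the endpoint (or the file itself) is absent;
      # in that case the stale kubeconfig is removed so kubeadm rewrites it.
      sudo grep "$endpoint" "$conf" || sudo rm -f "$conf"
    done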
	I1204 12:56:50.524103    5191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 12:56:50.544213    5191 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 12:56:50.544258    5191 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 12:56:50.603998    5191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 12:56:50.604084    5191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 12:56:50.604194    5191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 12:56:50.653559    5191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 12:56:50.657608    5191 out.go:235]   - Generating certificates and keys ...
	I1204 12:56:50.657639    5191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 12:56:50.657665    5191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 12:56:50.657703    5191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 12:56:50.657733    5191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 12:56:50.657773    5191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 12:56:50.657804    5191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 12:56:50.657839    5191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 12:56:50.657871    5191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 12:56:50.657911    5191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 12:56:50.657946    5191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 12:56:50.657972    5191 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 12:56:50.658002    5191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 12:56:50.717960    5191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 12:56:50.801884    5191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 12:56:50.836210    5191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 12:56:50.876747    5191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 12:56:50.909501    5191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 12:56:50.909889    5191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 12:56:50.909936    5191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 12:56:50.998369    5191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 12:56:51.002294    5191 out.go:235]   - Booting up control plane ...
	I1204 12:56:51.002335    5191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 12:56:51.002372    5191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 12:56:51.002410    5191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 12:56:51.002456    5191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 12:56:51.010916    5191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 12:56:55.513022    5191 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501964 seconds
	I1204 12:56:55.513083    5191 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 12:56:55.516421    5191 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 12:56:56.038013    5191 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 12:56:56.038414    5191 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-728000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 12:56:56.542066    5191 kubeadm.go:310] [bootstrap-token] Using token: 6zki70.26reqbzbfpvltfx2
	I1204 12:56:56.547777    5191 out.go:235]   - Configuring RBAC rules ...
	I1204 12:56:56.547840    5191 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 12:56:56.547885    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 12:56:56.549946    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 12:56:56.555187    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 12:56:56.556303    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 12:56:56.557118    5191 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 12:56:56.562647    5191 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 12:56:56.737376    5191 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 12:56:56.946436    5191 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 12:56:56.946845    5191 kubeadm.go:310] 
	I1204 12:56:56.946881    5191 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 12:56:56.946913    5191 kubeadm.go:310] 
	I1204 12:56:56.946949    5191 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 12:56:56.946976    5191 kubeadm.go:310] 
	I1204 12:56:56.946992    5191 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 12:56:56.947032    5191 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 12:56:56.947057    5191 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 12:56:56.947059    5191 kubeadm.go:310] 
	I1204 12:56:56.947107    5191 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 12:56:56.947112    5191 kubeadm.go:310] 
	I1204 12:56:56.947133    5191 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 12:56:56.947139    5191 kubeadm.go:310] 
	I1204 12:56:56.947165    5191 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 12:56:56.947202    5191 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 12:56:56.947241    5191 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 12:56:56.947245    5191 kubeadm.go:310] 
	I1204 12:56:56.947290    5191 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 12:56:56.947358    5191 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 12:56:56.947394    5191 kubeadm.go:310] 
	I1204 12:56:56.947434    5191 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6zki70.26reqbzbfpvltfx2 \
	I1204 12:56:56.947484    5191 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 \
	I1204 12:56:56.947495    5191 kubeadm.go:310] 	--control-plane 
	I1204 12:56:56.947498    5191 kubeadm.go:310] 
	I1204 12:56:56.947544    5191 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 12:56:56.947552    5191 kubeadm.go:310] 
	I1204 12:56:56.947592    5191 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6zki70.26reqbzbfpvltfx2 \
	I1204 12:56:56.947644    5191 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 
	I1204 12:56:56.947705    5191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 12:56:56.947716    5191 cni.go:84] Creating CNI manager for ""
	I1204 12:56:56.947727    5191 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:56:56.951535    5191 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 12:56:56.956530    5191 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 12:56:56.959788    5191 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
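The log records only the size of the conflist written here (496 bytes), not its contents. For orientation, a representative bridge-plus-portmap conflist of the kind a bridge CNI setup installs might look like the sketch below; the cniVersion, subnet, and plugin options are illustrative assumptions, not the exact file minikube wrote:

    # Illustrative bridge CNI config; all values below are assumptions.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF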
	I1204 12:56:56.965746    5191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 12:56:56.965830    5191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 12:56:56.965857    5191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-728000 minikube.k8s.io/updated_at=2024_12_04T12_56_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=running-upgrade-728000 minikube.k8s.io/primary=true
	I1204 12:56:57.006662    5191 ops.go:34] apiserver oom_adj: -16
	I1204 12:56:57.006661    5191 kubeadm.go:1113] duration metric: took 40.901042ms to wait for elevateKubeSystemPrivileges
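Two post-init steps run back to back above: reading the apiserver's oom_adj confirms the kubelet protected it from the OOM killer (the -16 reported here), and the minikube-rbac clusterrolebinding grants cluster-admin to kube-system's default service account so addon pods can manage cluster resources. For verification, assuming a shell inside the guest and reusing the binary path from the log:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # expect a negative value, e.g. -16
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide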
	I1204 12:56:57.006676    5191 kubeadm.go:394] duration metric: took 4m11.931105333s to StartCluster
	I1204 12:56:57.006686    5191 settings.go:142] acquiring lock: {Name:mkc9bc1437987e3de306bb25e3c2f4effe0b8b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:57.006789    5191 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:56:57.007201    5191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:57.007405    5191 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:56:57.007416    5191 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 12:56:57.007453    5191 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-728000"
	I1204 12:56:57.007456    5191 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-728000"
	I1204 12:56:57.007466    5191 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-728000"
	W1204 12:56:57.007471    5191 addons.go:243] addon storage-provisioner should already be in state true
	I1204 12:56:57.007485    5191 host.go:66] Checking if "running-upgrade-728000" exists ...
	I1204 12:56:57.007467    5191 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-728000"
	I1204 12:56:57.007597    5191 config.go:182] Loaded profile config "running-upgrade-728000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:56:57.008473    5191 kapi.go:59] client config for running-upgrade-728000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/client.key", CAFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102317740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 12:56:57.008781    5191 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-728000"
	W1204 12:56:57.008786    5191 addons.go:243] addon default-storageclass should already be in state true
	I1204 12:56:57.008793    5191 host.go:66] Checking if "running-upgrade-728000" exists ...
	I1204 12:56:57.011596    5191 out.go:177] * Verifying Kubernetes components...
	I1204 12:56:57.012004    5191 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 12:56:57.015637    5191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 12:56:57.015644    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	I1204 12:56:57.019469    5191 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:57.023533    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:57.026561    5191 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 12:56:57.026567    5191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 12:56:57.026573    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	I1204 12:56:57.116127    5191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 12:56:57.121858    5191 api_server.go:52] waiting for apiserver process to appear ...
	I1204 12:56:57.121909    5191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:57.126108    5191 api_server.go:72] duration metric: took 118.68875ms to wait for apiserver process to appear ...
	I1204 12:56:57.126115    5191 api_server.go:88] waiting for apiserver healthz status ...
	I1204 12:56:57.126123    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:57.158685    5191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 12:56:57.172950    5191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 12:56:57.512988    5191 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 12:56:57.513001    5191 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 12:57:02.128254    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:02.128297    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:07.128659    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:07.128692    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:12.129071    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:12.129104    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:17.129554    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:17.129581    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:22.130193    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:22.130239    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:27.131217    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:27.131265    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 12:57:27.515775    5191 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 12:57:27.519647    5191 out.go:177] * Enabled addons: storage-provisioner
	I1204 12:57:27.531511    5191 addons.go:510] duration metric: took 30.523720375s for enable addons: enabled=[storage-provisioner]
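Note the asymmetry above: storage-provisioner's manifest was applied with kubectl inside the guest and reported no error, while the default-storageclass step lists StorageClasses through the Go client against 10.0.2.15:8443 from the test host and timed out. Once the apiserver is reachable, the failed callback amounts to roughly the following patch (minikube's default class is named "standard"; treat this as an after-the-fact equivalent, not the actual code path):

    kubectl patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'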
	I1204 12:57:32.132370    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:32.132437    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:37.133901    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:37.133963    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:42.135607    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:42.135671    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:47.137744    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:47.137791    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:52.140087    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:52.140115    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:57.142387    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:57.142527    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:57.154864    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:57:57.154938    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:57.165397    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:57:57.165477    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:57.175984    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:57:57.176065    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:57.187420    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:57:57.187504    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:57.199013    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:57:57.199110    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:57.209045    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:57:57.209116    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:57.219339    5191 logs.go:282] 0 containers: []
	W1204 12:57:57.219351    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:57.219415    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:57.229862    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:57:57.229877    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:57.229883    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:57.254042    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:57.254052    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:57.287538    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:57:57.287549    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:57:57.301525    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:57:57.301535    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:57:57.312733    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:57:57.312746    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:57:57.326999    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:57:57.327011    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:57:57.348438    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:57:57.348449    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:57:57.359826    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:57.359836    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:57.364500    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:57.364507    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:57.401529    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:57:57.401543    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:57:57.415594    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:57:57.415607    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:57:57.427300    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:57:57.427313    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:57:57.439422    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:57:57.439435    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
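
[Editor's note: each diagnostic pass above has the same shape: enumerate control-plane containers with docker ps name filters, then tail each one's logs. A rough local Go equivalent using os/exec follows; it is a sketch only, since the real code runs these commands remotely through ssh_runner, which is not reproduced here.]

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists container IDs whose name matches a k8s component,
    // mirroring: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(component)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", component)
                continue
            }
            for _, id := range ids {
                // Tail the last 400 lines, as the log-gathering step above does.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
            }
        }
    }
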
	I1204 12:57:59.952779    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:04.955106    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:04.955222    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:04.968071    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:04.968158    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:04.978863    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:04.978935    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:04.989200    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:04.989265    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:04.999975    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:05.000052    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:05.010431    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:05.010511    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:05.022178    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:05.022252    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:05.032834    5191 logs.go:282] 0 containers: []
	W1204 12:58:05.032847    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:05.032914    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:05.043629    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:05.043647    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:05.043653    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:05.057575    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:05.057586    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:05.070051    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:05.070061    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:05.081983    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:05.081995    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:05.097129    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:05.097138    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:05.108820    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:05.108832    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:05.127478    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:05.127490    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:05.162525    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:05.162536    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:05.167565    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:05.167572    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:05.204203    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:05.204215    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:05.222980    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:05.222992    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:05.235159    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:05.235171    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:05.254437    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:05.254448    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:07.782550    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:12.784990    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:12.785162    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:12.801585    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:12.801681    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:12.814431    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:12.814507    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:12.825350    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:12.825426    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:12.837692    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:12.837771    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:12.848876    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:12.848954    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:12.859704    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:12.859780    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:12.870153    5191 logs.go:282] 0 containers: []
	W1204 12:58:12.870164    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:12.870225    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:12.880787    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:12.880803    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:12.880809    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:12.896507    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:12.896521    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:12.921634    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:12.921642    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:12.933713    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:12.933725    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:12.938963    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:12.938970    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:12.953567    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:12.953578    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:12.964757    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:12.964768    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:12.977245    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:12.977257    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:12.990054    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:12.990067    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:13.024324    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:13.024336    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:13.064187    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:13.064198    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:13.078674    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:13.078687    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:13.093358    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:13.093370    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:15.613335    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:20.615743    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:20.615922    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:20.631540    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:20.631626    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:20.644665    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:20.644748    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:20.655099    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:20.655169    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:20.667722    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:20.667804    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:20.680357    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:20.680431    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:20.690728    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:20.690797    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:20.701235    5191 logs.go:282] 0 containers: []
	W1204 12:58:20.701247    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:20.701311    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:20.711375    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:20.711395    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:20.711401    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:20.746670    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:20.746681    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:20.760486    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:20.760499    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:20.771884    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:20.771897    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:20.783504    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:20.783518    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:20.795518    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:20.795533    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:20.799939    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:20.799948    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:20.816150    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:20.816161    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:20.831738    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:20.831752    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:20.846303    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:20.846317    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:20.857980    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:20.857993    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:20.875162    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:20.875173    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:20.899211    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:20.899223    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:23.434642    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:28.436948    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:28.437143    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:28.454229    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:28.454326    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:28.467557    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:28.467632    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:28.479081    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:28.479157    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:28.489672    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:28.489747    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:28.508527    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:28.508604    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:28.523927    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:28.524001    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:28.534466    5191 logs.go:282] 0 containers: []
	W1204 12:58:28.534487    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:28.534562    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:28.544958    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:28.544975    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:28.544981    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:28.568851    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:28.568858    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:28.603569    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:28.603577    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:28.617495    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:28.617508    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:28.633315    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:28.633326    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:28.644939    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:28.644949    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:28.660419    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:28.660428    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:28.677857    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:28.677868    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:28.689967    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:28.689979    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:28.703507    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:28.703523    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:28.708397    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:28.708403    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:28.743875    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:28.743892    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:28.759442    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:28.759457    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:31.273302    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:36.275612    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:36.275802    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:36.292476    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:36.292571    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:36.311370    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:36.311451    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:36.322429    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:36.322515    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:36.333888    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:36.333963    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:36.345096    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:36.345179    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:36.356227    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:36.356306    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:36.366110    5191 logs.go:282] 0 containers: []
	W1204 12:58:36.366127    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:36.366197    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:36.376546    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:36.376560    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:36.376567    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:36.389331    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:36.389344    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:36.425080    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:36.425092    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:36.463903    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:36.463920    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:36.483757    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:36.483771    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:36.495508    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:36.495522    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:36.507202    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:36.507212    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:36.519439    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:36.519450    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:36.524039    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:36.524046    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:36.539084    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:36.539094    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:36.553168    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:36.553184    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:36.571432    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:36.571443    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:36.583673    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:36.583683    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:39.110595    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:44.113024    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:44.113166    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:44.125856    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:44.125949    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:44.136548    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:44.136625    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:44.147246    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:44.147333    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:44.158012    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:44.158093    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:44.169007    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:44.169084    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:44.179289    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:44.179363    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:44.189999    5191 logs.go:282] 0 containers: []
	W1204 12:58:44.190012    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:44.190079    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:44.201044    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:44.201063    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:44.201069    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:44.226540    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:44.226551    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:44.262007    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:44.262016    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:44.267006    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:44.267014    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:44.302815    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:44.302825    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:44.314739    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:44.314750    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:44.334506    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:44.334518    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:44.348275    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:44.348286    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:44.360815    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:44.360825    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:44.380956    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:44.380965    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:44.395163    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:44.395174    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:44.406774    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:44.406784    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:44.421213    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:44.421224    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:46.935190    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:51.936866    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:51.937043    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:51.953367    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:51.953441    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:51.963650    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:51.963724    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:51.974786    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:51.974869    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:51.999423    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:51.999498    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:52.010138    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:52.010219    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:52.025286    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:52.025373    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:52.035823    5191 logs.go:282] 0 containers: []
	W1204 12:58:52.035838    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:52.035919    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:52.046633    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:52.046651    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:52.046657    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:52.061195    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:52.061205    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:52.085231    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:52.085240    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:52.097351    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:52.097361    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:52.121036    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:52.121045    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:52.136064    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:52.136074    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:52.147797    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:52.147810    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:52.182774    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:52.182786    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:52.187732    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:52.187739    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:52.222734    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:52.222747    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:52.237578    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:52.237589    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:52.255166    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:52.255176    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:52.267282    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:52.267293    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:54.781429    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:59.783779    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:59.783945    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:59.795343    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:59.795427    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:59.805836    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:59.805920    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:59.816575    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:59.816652    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:59.826892    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:59.826960    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:59.837836    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:59.837911    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:59.848331    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:59.848411    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:59.858565    5191 logs.go:282] 0 containers: []
	W1204 12:58:59.858577    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:59.858640    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:59.868781    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:59.868796    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:59.868804    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:59.880688    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:59.880703    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:59.915499    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:59.915508    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:59.927076    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:59.927087    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:59.939262    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:59.939274    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:59.953569    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:59.953578    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:59.965004    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:59.965015    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:59.982720    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:59.982729    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:59.987717    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:59.987725    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:00.028512    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:00.028523    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:00.043420    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:00.043433    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:00.057768    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:00.057780    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:00.069486    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:00.069500    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:02.595448    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:07.597805    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:07.597899    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:07.610456    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:07.610538    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:07.624142    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:07.624221    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:07.636309    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:59:07.636384    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:07.647518    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:07.647599    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:07.659365    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:07.659452    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:07.670910    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:07.670987    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:07.681752    5191 logs.go:282] 0 containers: []
	W1204 12:59:07.681763    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:07.681825    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:07.693251    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:07.693266    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:07.693272    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:07.705216    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:07.705228    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:07.720266    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:07.720280    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:07.732940    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:07.732949    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:07.750888    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:07.750898    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:07.774529    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:07.774545    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:07.807583    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:07.807591    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:07.811867    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:07.811873    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:07.847697    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:07.847712    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:07.859347    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:07.859359    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:07.870662    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:07.870677    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:07.885446    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:07.885460    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:07.899499    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:07.899515    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:10.413520    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:15.415878    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:15.415981    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:15.427346    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:15.427432    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:15.439066    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:15.439147    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:15.453965    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:15.454074    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:15.472394    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:15.472475    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:15.484211    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:15.484291    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:15.504593    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:15.504678    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:15.515483    5191 logs.go:282] 0 containers: []
	W1204 12:59:15.515496    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:15.515565    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:15.526830    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:15.526847    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:15.526852    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:15.547396    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:15.547409    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:15.562380    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:15.562392    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:15.601099    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:15.601112    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:15.619179    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:15.619193    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:15.630842    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:15.630853    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:15.642859    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:15.642870    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:15.676418    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:15.676430    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:15.688042    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:15.688058    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:15.699533    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:15.699545    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:15.711428    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:15.711439    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:15.723718    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:15.723729    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:15.738844    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:15.738855    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:15.750979    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:15.750991    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:15.774753    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:15.774760    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:18.281729    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:23.284175    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:23.284275    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:23.295898    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:23.295984    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:23.308212    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:23.308294    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:23.320020    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:23.320102    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:23.331763    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:23.331843    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:23.345752    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:23.345826    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:23.357601    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:23.357680    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:23.369314    5191 logs.go:282] 0 containers: []
	W1204 12:59:23.369323    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:23.369387    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:23.381490    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:23.381507    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:23.381515    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:23.394585    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:23.394600    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:23.421450    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:23.421471    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:23.441373    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:23.441388    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:23.458578    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:23.458595    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:23.472081    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:23.472094    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:23.484863    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:23.484874    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:23.500474    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:23.500485    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:23.520884    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:23.520896    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:23.541025    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:23.541035    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:23.577911    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:23.577934    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:23.618187    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:23.618198    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:23.629389    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:23.629404    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:23.641327    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:23.641337    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:23.666019    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:23.666036    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:26.171871    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:31.173086    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:31.173206    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:31.185155    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:31.185260    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:31.196801    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:31.196875    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:31.213673    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:31.213756    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:31.225047    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:31.225125    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:31.235993    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:31.236072    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:31.248919    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:31.249005    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:31.263959    5191 logs.go:282] 0 containers: []
	W1204 12:59:31.263973    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:31.264050    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:31.276062    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:31.276079    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:31.276085    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:31.289488    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:31.289498    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:31.301882    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:31.301892    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:31.317351    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:31.317362    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:31.331732    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:31.331742    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:31.345511    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:31.345524    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:31.384069    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:31.384081    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:31.398711    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:31.398721    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:31.410630    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:31.410641    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:31.423604    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:31.423619    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:31.442349    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:31.442359    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:31.479647    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:31.479661    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:31.494926    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:31.494937    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:31.520975    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:31.520988    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:31.533605    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:31.533620    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:34.039664    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:39.041933    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:39.042013    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:39.053557    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:39.053639    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:39.065124    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:39.065203    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:39.076655    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:39.076734    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:39.088100    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:39.088175    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:39.099057    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:39.099119    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:39.110521    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:39.110583    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:39.125414    5191 logs.go:282] 0 containers: []
	W1204 12:59:39.125425    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:39.125497    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:39.136692    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:39.136708    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:39.136713    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:39.174666    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:39.174679    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:39.201121    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:39.201132    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:39.220557    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:39.220568    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:39.232851    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:39.232867    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:39.267847    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:39.267859    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:39.280203    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:39.280212    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:39.293328    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:39.293339    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:39.307884    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:39.307896    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:39.320310    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:39.320321    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:39.335641    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:39.335653    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:39.349416    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:39.349428    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:39.361980    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:39.361991    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:39.367423    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:39.367436    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:39.382834    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:39.382848    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:41.900661    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:46.903066    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:46.903376    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:46.927498    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:46.927624    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:46.943374    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:46.943467    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:46.956510    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:46.956595    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:46.968762    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:46.968844    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:46.980752    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:46.980820    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:46.992490    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:46.992563    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:47.003414    5191 logs.go:282] 0 containers: []
	W1204 12:59:47.003424    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:47.003480    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:47.014816    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:47.014854    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:47.014863    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:47.019941    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:47.019951    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:47.034073    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:47.034084    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:47.053940    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:47.053949    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:47.070465    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:47.070478    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:47.089047    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:47.089059    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:47.102618    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:47.102629    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:47.129712    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:47.129729    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:47.142920    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:47.142931    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:47.180962    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:47.180976    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:47.198785    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:47.198797    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:47.216911    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:47.216926    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:47.255741    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:47.255752    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:47.271778    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:47.271791    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:47.287145    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:47.287163    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:49.802208    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:54.803125    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:54.803298    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:54.814298    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:54.814383    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:54.825341    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:54.825419    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:54.837761    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:54.837835    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:54.848659    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:54.848731    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:54.858995    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:54.859082    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:54.870113    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:54.870190    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:54.881171    5191 logs.go:282] 0 containers: []
	W1204 12:59:54.881184    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:54.881252    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:54.892667    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:54.892687    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:54.892694    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:54.930258    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:54.930275    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:54.947703    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:54.947718    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:54.961305    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:54.961320    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:54.977255    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:54.977264    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:54.989864    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:54.989876    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:55.002394    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:55.002405    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:55.015824    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:55.015836    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:55.041318    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:55.041333    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:55.046662    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:55.046670    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:55.085218    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:55.085229    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:55.104393    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:55.104401    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:55.117116    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:55.117133    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:55.129903    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:55.129915    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:55.151288    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:55.151300    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:57.666780    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:02.669282    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:02.669531    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:02.693353    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:02.693490    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:02.712453    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:02.712573    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:02.727156    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:02.727249    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:02.737884    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:02.737968    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:02.751225    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:02.751312    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:02.763841    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:02.763938    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:02.775486    5191 logs.go:282] 0 containers: []
	W1204 13:00:02.775500    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:02.775577    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:02.785895    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:02.785914    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:02.785920    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:02.823649    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:02.823660    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:02.840097    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:02.840109    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:02.859303    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:02.859317    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:02.871658    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:02.871670    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:02.884521    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:02.884533    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:02.898882    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:02.898894    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:02.914514    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:02.914528    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:02.949712    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:02.949727    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:02.962057    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:02.962070    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:02.975307    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:02.975321    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:03.000791    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:03.000803    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:03.006657    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:03.006666    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:03.021972    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:03.021983    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:03.036837    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:03.036845    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:05.552088    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:10.554363    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:10.554567    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:10.568054    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:10.568146    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:10.580019    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:10.580100    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:10.590898    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:10.590983    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:10.601672    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:10.601752    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:10.615836    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:10.615911    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:10.626277    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:10.626356    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:10.636771    5191 logs.go:282] 0 containers: []
	W1204 13:00:10.636781    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:10.636854    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:10.647134    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:10.647149    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:10.647155    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:10.659486    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:10.659500    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:10.676866    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:10.676877    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:10.688821    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:10.688835    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:10.706870    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:10.706885    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:10.726719    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:10.726728    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:10.741142    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:10.741156    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:10.756981    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:10.756996    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:10.769768    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:10.769780    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:10.795535    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:10.795552    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:10.808094    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:10.808110    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:10.851104    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:10.851116    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:10.889634    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:10.889648    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:10.903053    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:10.903066    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:10.919330    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:10.919345    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:13.426865    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:18.429562    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:18.430062    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:18.462895    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:18.463045    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:18.490632    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:18.490727    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:18.504003    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:18.504093    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:18.515339    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:18.515414    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:18.525798    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:18.525886    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:18.536596    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:18.536701    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:18.547179    5191 logs.go:282] 0 containers: []
	W1204 13:00:18.547189    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:18.547249    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:18.558627    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:18.558642    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:18.558647    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:18.570892    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:18.570903    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:18.596545    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:18.596554    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:18.601282    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:18.601289    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:18.612825    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:18.612838    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:18.624538    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:18.624549    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:18.636154    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:18.636166    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:18.650660    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:18.650668    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:18.666769    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:18.666780    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:18.679492    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:18.679505    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:18.695907    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:18.695923    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:18.733265    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:18.733277    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:18.748387    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:18.748400    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:18.766729    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:18.766743    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:18.780716    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:18.780730    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:21.318303    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:26.320541    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:26.320657    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:26.332242    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:26.332323    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:26.344303    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:26.344385    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:26.356501    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:26.356582    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:26.373382    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:26.373447    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:26.384672    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:26.384747    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:26.395464    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:26.395538    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:26.405969    5191 logs.go:282] 0 containers: []
	W1204 13:00:26.406007    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:26.406075    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:26.417536    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:26.417550    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:26.417555    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:26.429920    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:26.429931    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:26.471044    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:26.471054    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:26.484818    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:26.484831    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:26.502418    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:26.502430    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:26.515763    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:26.515775    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:26.550858    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:26.550877    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:26.555973    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:26.555985    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:26.571562    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:26.571574    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:26.584359    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:26.584374    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:26.596918    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:26.596929    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:26.612246    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:26.612257    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:26.627681    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:26.627698    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:26.643203    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:26.643215    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:26.661239    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:26.661251    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:29.187720    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:34.190053    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:34.190248    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:34.202453    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:34.202592    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:34.213068    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:34.213153    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:34.223544    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:34.223626    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:34.233724    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:34.233799    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:34.244496    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:34.244570    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:34.255356    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:34.255435    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:34.265676    5191 logs.go:282] 0 containers: []
	W1204 13:00:34.265688    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:34.265759    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:34.275861    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:34.275880    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:34.275887    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:34.310903    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:34.310914    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:34.324624    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:34.324635    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:34.340218    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:34.340229    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:34.364235    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:34.364245    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:34.398180    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:34.398189    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:34.412981    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:34.412996    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:34.432639    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:34.432649    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:34.444671    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:34.444682    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:34.456984    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:34.456996    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:34.474869    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:34.474880    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:34.486341    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:34.486350    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:34.491333    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:34.491341    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:34.503195    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:34.503208    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:34.515022    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:34.515036    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:37.029274    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:42.031549    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:42.031711    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:42.045711    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:42.045798    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:42.056836    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:42.056907    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:42.067882    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:42.067965    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:42.078499    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:42.078575    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:42.089440    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:42.089514    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:42.100111    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:42.100190    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:42.110206    5191 logs.go:282] 0 containers: []
	W1204 13:00:42.110217    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:42.110285    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:42.120827    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:42.120845    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:42.120851    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:42.156584    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:42.156600    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:42.168763    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:42.168774    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:42.181912    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:42.181923    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:42.196672    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:42.196683    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:42.208549    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:42.208560    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:42.224462    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:42.224476    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:42.242618    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:42.242630    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:42.254501    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:42.254512    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:42.280124    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:42.280143    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:42.285136    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:42.285144    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:42.320730    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:42.320741    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:42.334909    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:42.334921    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:42.349275    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:42.349287    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:42.364438    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:42.364452    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:44.878389    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:49.880797    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:49.881022    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:49.897389    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:49.897478    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:49.909648    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:49.909726    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:49.921026    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:49.921102    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:49.933088    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:49.933171    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:49.949964    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:49.950044    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:49.960507    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:49.960582    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:49.971361    5191 logs.go:282] 0 containers: []
	W1204 13:00:49.971374    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:49.971437    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:49.982155    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:49.982172    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:49.982179    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:50.018931    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:50.018947    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:50.031949    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:50.031959    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:50.044922    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:50.044933    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:50.057336    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:50.057347    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:50.069256    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:50.069265    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:50.074423    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:50.074429    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:50.088512    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:50.088521    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:50.101185    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:50.101194    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:50.124065    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:50.124072    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:50.158704    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:50.158721    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:50.180275    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:50.180285    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:50.191933    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:50.191945    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:50.207219    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:50.207233    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:50.218614    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:50.218624    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:52.739087    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:57.741558    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:57.746927    5191 out.go:201] 
	W1204 13:00:57.749953    5191 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1204 13:00:57.749989    5191 out.go:270] * 
	W1204 13:00:57.752185    5191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:00:57.760798    5191 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-728000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
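The stderr above shows the characteristic failure mode: each `Checking apiserver healthz` probe times out after roughly five seconds until the overall 6m0s node wait expires, and minikube exits with status 80 (its guest-error class of exit codes). A minimal sketch of that kind of health-polling loop in Go, assuming a hypothetical `waitForHealthz` helper rather than minikube's actual `api_server.go` implementation:

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the overall deadline expires. Hypothetical helper for
// illustration; not minikube's actual implementation.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout, matching the ~5s gaps in the log
		Transport: &http.Transport{
			// the guest apiserver certificate is not trusted by the host
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		if err := ctx.Err(); err != nil {
			return fmt.Errorf("apiserver healthz never reported healthy: %w", err)
		}
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// "wait 6m0s for node" from the exit message above
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}
```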
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-04 13:00:57.885752 -0800 PST m=+4148.088624834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-728000 -n running-upgrade-728000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-728000 -n running-upgrade-728000: exit status 2 (15.60969925s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-728000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-883000          | force-systemd-flag-883000 | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-825000              | force-systemd-env-825000  | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-825000           | force-systemd-env-825000  | jenkins | v1.34.0 | 04 Dec 24 12:51 PST | 04 Dec 24 12:51 PST |
	| start   | -p docker-flags-227000                | docker-flags-227000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-883000             | force-systemd-flag-883000 | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-883000          | force-systemd-flag-883000 | jenkins | v1.34.0 | 04 Dec 24 12:51 PST | 04 Dec 24 12:51 PST |
	| start   | -p cert-expiration-420000             | cert-expiration-420000    | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-227000 ssh               | docker-flags-227000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-227000 ssh               | docker-flags-227000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-227000                | docker-flags-227000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST | 04 Dec 24 12:51 PST |
	| start   | -p cert-options-655000                | cert-options-655000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-655000 ssh               | cert-options-655000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-655000 -- sudo        | cert-options-655000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-655000                | cert-options-655000       | jenkins | v1.34.0 | 04 Dec 24 12:51 PST | 04 Dec 24 12:51 PST |
	| start   | -p running-upgrade-728000             | minikube                  | jenkins | v1.26.0 | 04 Dec 24 12:51 PST | 04 Dec 24 12:52 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-728000             | running-upgrade-728000    | jenkins | v1.34.0 | 04 Dec 24 12:52 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-420000             | cert-expiration-420000    | jenkins | v1.34.0 | 04 Dec 24 12:54 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-420000             | cert-expiration-420000    | jenkins | v1.34.0 | 04 Dec 24 12:54 PST | 04 Dec 24 12:54 PST |
	| start   | -p kubernetes-upgrade-617000          | kubernetes-upgrade-617000 | jenkins | v1.34.0 | 04 Dec 24 12:54 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-617000          | kubernetes-upgrade-617000 | jenkins | v1.34.0 | 04 Dec 24 12:54 PST | 04 Dec 24 12:54 PST |
	| start   | -p kubernetes-upgrade-617000          | kubernetes-upgrade-617000 | jenkins | v1.34.0 | 04 Dec 24 12:54 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-617000          | kubernetes-upgrade-617000 | jenkins | v1.34.0 | 04 Dec 24 12:54 PST | 04 Dec 24 12:54 PST |
	| start   | -p stopped-upgrade-827000             | minikube                  | jenkins | v1.26.0 | 04 Dec 24 12:54 PST | 04 Dec 24 12:55 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-827000 stop           | minikube                  | jenkins | v1.26.0 | 04 Dec 24 12:55 PST | 04 Dec 24 12:55 PST |
	| start   | -p stopped-upgrade-827000             | stopped-upgrade-827000    | jenkins | v1.34.0 | 04 Dec 24 12:55 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 12:55:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 12:55:46.147159    5382 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:55:46.147320    5382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:55:46.147324    5382 out.go:358] Setting ErrFile to fd 2...
	I1204 12:55:46.147327    5382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:55:46.147502    5382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:55:46.148665    5382 out.go:352] Setting JSON to false
	I1204 12:55:46.170185    5382 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5117,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:55:46.170266    5382 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:55:46.174058    5382 out.go:177] * [stopped-upgrade-827000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:55:46.182003    5382 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:55:46.182026    5382 notify.go:220] Checking for updates...
	I1204 12:55:46.188891    5382 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:55:46.192890    5382 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:55:46.196790    5382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:55:46.199985    5382 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:55:46.202992    5382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:55:46.206235    5382 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:55:46.209952    5382 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 12:55:46.212937    5382 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:55:46.215991    5382 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:55:46.222897    5382 start.go:297] selected driver: qemu2
	I1204 12:55:46.222903    5382 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:55:46.222951    5382 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:55:46.226009    5382 cni.go:84] Creating CNI manager for ""
	I1204 12:55:46.226043    5382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:55:46.226074    5382 start.go:340] cluster config:
	{Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:55:46.226134    5382 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:55:46.234991    5382 out.go:177] * Starting "stopped-upgrade-827000" primary control-plane node in "stopped-upgrade-827000" cluster
	I1204 12:55:46.237909    5382 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 12:55:46.237925    5382 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1204 12:55:46.237935    5382 cache.go:56] Caching tarball of preloaded images
	I1204 12:55:46.238011    5382 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:55:46.238022    5382 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1204 12:55:46.238071    5382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/config.json ...
	I1204 12:55:46.238686    5382 start.go:360] acquireMachinesLock for stopped-upgrade-827000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:55:46.238739    5382 start.go:364] duration metric: took 40.625µs to acquireMachinesLock for "stopped-upgrade-827000"
	I1204 12:55:46.238748    5382 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:55:46.238752    5382 fix.go:54] fixHost starting: 
	I1204 12:55:46.238874    5382 fix.go:112] recreateIfNeeded on stopped-upgrade-827000: state=Stopped err=<nil>
	W1204 12:55:46.238882    5382 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:55:46.244890    5382 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-827000" ...
	I1204 12:55:46.361092    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:46.361312    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:46.382399    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:46.382509    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:46.405951    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:46.406031    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:46.422573    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:46.422658    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:46.442804    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:46.442890    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:46.459340    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:46.459429    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:46.470717    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:46.470813    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:46.486622    5191 logs.go:282] 0 containers: []
	W1204 12:55:46.486635    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:46.486706    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:46.497683    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:46.497705    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:46.497714    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:46.518279    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:46.518292    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:46.542210    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:46.542228    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:46.554583    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:46.554597    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:46.559317    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:46.559326    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:46.605288    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:46.605301    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:46.619169    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:46.619180    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:46.635634    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:46.635646    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:46.652976    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:46.652989    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:46.664635    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:46.664650    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:46.678029    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:46.678042    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:46.690120    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:46.690132    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:46.727773    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:46.727781    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:46.741377    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:46.741387    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:46.752889    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:46.752902    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:46.770063    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:46.770075    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:49.286383    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:55:46.248931    5382 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:55:46.249029    5382 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/qemu.pid -nic user,model=virtio,hostfwd=tcp::63822-:22,hostfwd=tcp::63823-:2376,hostname=stopped-upgrade-827000 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/disk.qcow2
	I1204 12:55:46.296885    5382 main.go:141] libmachine: STDOUT: 
	I1204 12:55:46.296913    5382 main.go:141] libmachine: STDERR: 
	I1204 12:55:46.296923    5382 main.go:141] libmachine: Waiting for VM to start (ssh -p 63822 docker@127.0.0.1)...
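For reference, the restart step above boots the VM by shelling out to qemu-system-aarch64. A self-contained sketch of assembling that invocation with os/exec (arguments copied from the log line; this stands in for, and is not, minikube's libmachine driver code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Roughly how a driver could assemble the qemu-system-aarch64 invocation
// seen in the log above. Paths and forwarded ports are taken from the log;
// illustrative sketch only.
func main() {
	machine := "/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000"
	args := []string{
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-drive", "file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash",
		"-display", "none",
		"-accel", "hvf", // Hypervisor.framework acceleration on Apple Silicon
		"-m", "2200", "-smp", "2", "-boot", "d",
		"-cdrom", machine + "/boot2docker.iso",
		"-qmp", "unix:" + machine + "/monitor,server,nowait",
		"-pidfile", machine + "/qemu.pid",
		// user-mode networking: SSH (63822) and dockerd TLS (63823) forwarded to the host
		"-nic", "user,model=virtio,hostfwd=tcp::63822-:22,hostfwd=tcp::63823-:2376,hostname=stopped-upgrade-827000",
		"-daemonize",
		machine + "/disk.qcow2",
	}
	out, err := exec.Command("qemu-system-aarch64", args...).CombinedOutput()
	fmt.Printf("output: %s (err=%v)\n", out, err)
}
```

With `-daemonize`, qemu detaches once the VM is up, which is why the very next log line can begin waiting for SSH on the forwarded port 63822.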
	I1204 12:55:54.289320    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:55:54.289507    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:55:54.301656    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:55:54.301751    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:55:54.312780    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:55:54.312874    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:55:54.323561    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:55:54.323640    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:55:54.334523    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:55:54.334609    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:55:54.345840    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:55:54.345920    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:55:54.357325    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:55:54.357403    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:55:54.368339    5191 logs.go:282] 0 containers: []
	W1204 12:55:54.368358    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:55:54.368424    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:55:54.379410    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:55:54.379428    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:55:54.379433    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:55:54.417622    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:55:54.417629    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:55:54.429091    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:55:54.429103    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:55:54.449343    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:55:54.449355    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:55:54.461637    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:55:54.461648    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:55:54.481197    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:55:54.481208    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:55:54.501006    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:55:54.501017    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:55:54.515595    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:55:54.515607    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:55:54.527211    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:55:54.527220    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:55:54.564891    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:55:54.564907    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:55:54.577081    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:55:54.577091    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:55:54.590756    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:55:54.590766    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:55:54.616035    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:55:54.616043    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:55:54.620482    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:55:54.620487    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:55:54.634688    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:55:54.634700    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:55:54.649434    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:55:54.649443    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:55:57.166078    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:02.168473    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:02.168825    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:02.196463    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:02.196598    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:02.214040    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:02.214141    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:02.227419    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:02.227529    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:02.239651    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:02.239728    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:02.251490    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:02.251590    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:02.262132    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:02.262216    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:02.272239    5191 logs.go:282] 0 containers: []
	W1204 12:56:02.272258    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:02.272320    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:02.282873    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:02.282889    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:02.282894    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:02.294728    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:02.294738    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:02.305744    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:02.305756    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:02.319375    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:02.319384    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:02.333497    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:02.333509    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:02.345017    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:02.345034    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:02.349393    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:02.349400    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:02.363209    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:02.363225    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:02.384485    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:02.384497    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:02.396379    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:02.396390    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:02.419658    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:02.419669    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:02.437471    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:02.437482    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:02.452306    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:02.452316    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:02.486730    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:02.486744    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:02.504603    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:02.504617    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:02.515918    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:02.515932    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:05.394323    5382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/config.json ...
	I1204 12:56:05.394709    5382 machine.go:93] provisionDockerMachine start ...
	I1204 12:56:05.394805    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.395061    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.395068    5382 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 12:56:05.463797    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 12:56:05.463814    5382 buildroot.go:166] provisioning hostname "stopped-upgrade-827000"
	I1204 12:56:05.463888    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.464002    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.464009    5382 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-827000 && echo "stopped-upgrade-827000" | sudo tee /etc/hostname
	I1204 12:56:05.532721    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-827000
	
	I1204 12:56:05.532778    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.532884    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.532892    5382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-827000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-827000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-827000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 12:56:05.603121    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 12:56:05.603134    5382 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19985-1334/.minikube CaCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19985-1334/.minikube}
	I1204 12:56:05.603150    5382 buildroot.go:174] setting up certificates
	I1204 12:56:05.603154    5382 provision.go:84] configureAuth start
	I1204 12:56:05.603161    5382 provision.go:143] copyHostCerts
	I1204 12:56:05.603239    5382 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem, removing ...
	I1204 12:56:05.603248    5382 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem
	I1204 12:56:05.603349    5382 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem (1123 bytes)
	I1204 12:56:05.603534    5382 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem, removing ...
	I1204 12:56:05.603539    5382 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem
	I1204 12:56:05.603594    5382 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem (1679 bytes)
	I1204 12:56:05.603708    5382 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem, removing ...
	I1204 12:56:05.603714    5382 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem
	I1204 12:56:05.603792    5382 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem (1082 bytes)
	I1204 12:56:05.603885    5382 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-827000 san=[127.0.0.1 localhost minikube stopped-upgrade-827000]
	I1204 12:56:05.772546    5382 provision.go:177] copyRemoteCerts
	I1204 12:56:05.772615    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 12:56:05.772627    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 12:56:05.809385    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 12:56:05.816568    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1204 12:56:05.823297    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 12:56:05.830215    5382 provision.go:87] duration metric: took 227.049292ms to configureAuth
	I1204 12:56:05.830224    5382 buildroot.go:189] setting minikube options for container-runtime
	I1204 12:56:05.830328    5382 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:56:05.830384    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.830476    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.830482    5382 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 12:56:05.899607    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 12:56:05.899616    5382 buildroot.go:70] root file system type: tmpfs
	I1204 12:56:05.899670    5382 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 12:56:05.899730    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.899847    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.899881    5382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 12:56:05.968168    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 12:56:05.968242    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.968352    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.968362    5382 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 12:56:06.349184    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
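The `diff ... || { mv ...; systemctl ... }` one-liner above makes the unit install idempotent: docker is only reloaded, enabled, and restarted when the freshly rendered docker.service.new actually differs from the installed unit (here the diff fails because no unit exists yet, so it installs one and creates the symlink). A sketch of the same shape, with a hypothetical `runSSH` stand-in for minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a hypothetical stand-in for minikube's ssh_runner: here we just
// shell out locally to show the shape of the call.
func runSSH(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// Command reproduced from the log: install docker.service.new only when
	// it differs from the installed unit, then reload/enable/restart.
	const cmd = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	out, err := runSSH(cmd)
	fmt.Printf("%s(err=%v)\n", out, err)
}
```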
	
	I1204 12:56:06.349199    5382 machine.go:96] duration metric: took 954.472417ms to provisionDockerMachine
	I1204 12:56:06.349206    5382 start.go:293] postStartSetup for "stopped-upgrade-827000" (driver="qemu2")
	I1204 12:56:06.349213    5382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 12:56:06.349281    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 12:56:06.349291    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 12:56:06.385410    5382 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 12:56:06.386817    5382 info.go:137] Remote host: Buildroot 2021.02.12
	I1204 12:56:06.386825    5382 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/addons for local assets ...
	I1204 12:56:06.386918    5382 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/files for local assets ...
	I1204 12:56:06.387063    5382 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem -> 18562.pem in /etc/ssl/certs
	I1204 12:56:06.387228    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 12:56:06.390134    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:56:06.397344    5382 start.go:296] duration metric: took 48.1325ms for postStartSetup
	I1204 12:56:06.397359    5382 fix.go:56] duration metric: took 20.158358875s for fixHost
	I1204 12:56:06.397403    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:06.397510    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:06.397518    5382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 12:56:06.463388    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733345766.381559087
	
	I1204 12:56:06.463403    5382 fix.go:216] guest clock: 1733345766.381559087
	I1204 12:56:06.463407    5382 fix.go:229] Guest: 2024-12-04 12:56:06.381559087 -0800 PST Remote: 2024-12-04 12:56:06.39736 -0800 PST m=+20.282119751 (delta=-15.800913ms)
	I1204 12:56:06.463417    5382 fix.go:200] guest clock delta is within tolerance: -15.800913ms
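The guest-clock check above compares a `date +%s.%N` sample from the VM against the host clock. Reproducing the arithmetic from the logged values (the tolerance constant below is an assumption for illustration, not minikube's actual threshold):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output, e.g. "1733345766.381559087",
// into a time.Time (%N always yields nine digits, i.e. nanoseconds).
func parseGuestClock(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(nsecStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733345766.381559087") // sample from the log
	if err != nil {
		panic(err)
	}
	// host-side timestamp from the same log line ("Remote: 2024-12-04 12:56:06.39736 -0800 PST")
	host := time.Date(2024, 12, 4, 12, 56, 6, 397360000, time.FixedZone("PST", -8*60*60))
	delta := guest.Sub(host)
	fmt.Printf("guest clock delta: %v\n", delta) // -15.800913ms, matching the log
	const tolerance = time.Second                // assumed threshold, for illustration
	if delta > -tolerance && delta < tolerance {
		fmt.Println("guest clock delta is within tolerance")
	}
}
```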
	I1204 12:56:06.463420    5382 start.go:83] releasing machines lock for "stopped-upgrade-827000", held for 20.22442825s
	I1204 12:56:06.463506    5382 ssh_runner.go:195] Run: cat /version.json
	I1204 12:56:06.463517    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 12:56:06.463507    5382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 12:56:06.463546    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	W1204 12:56:06.464274    5382 sshutil.go:64] dial failure (will retry): dial tcp [::1]:63822: connect: connection refused
	I1204 12:56:06.464300    5382 retry.go:31] will retry after 331.644316ms: dial tcp [::1]:63822: connect: connection refused
	W1204 12:56:06.859274    5382 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1204 12:56:06.859479    5382 ssh_runner.go:195] Run: systemctl --version
	I1204 12:56:06.864068    5382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 12:56:06.867449    5382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 12:56:06.867533    5382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1204 12:56:06.874264    5382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1204 12:56:06.883205    5382 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
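The `find ... -exec sed` pair above rewrites any bridge/podman CNI config so its subnet becomes the 10.244.0.0/16 pod CIDR. A hedged Go sketch of the same idea done via JSON instead of sed; it assumes a flat `ipam.subnet` layout, while real conflists may nest the subnet under `ipam.ranges`:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        const path = "/etc/cni/net.d/87-podman-bridge.conflist" // path from the log
        const podCIDR = "10.244.0.0/16"

        raw, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        var conf map[string]any
        if err := json.Unmarshal(raw, &conf); err != nil {
            panic(err)
        }
        plugins, ok := conf["plugins"].([]any)
        if !ok {
            panic("no plugins array in conflist")
        }
        for _, p := range plugins {
            plugin, ok := p.(map[string]any)
            if !ok {
                continue
            }
            // Patch a flat ipam.subnet field (an assumption of this sketch).
            if ipam, ok := plugin["ipam"].(map[string]any); ok {
                ipam["subnet"] = podCIDR
            }
        }
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("patched", path)
    }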
	I1204 12:56:06.883226    5382 start.go:495] detecting cgroup driver to use...
	I1204 12:56:06.883348    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:56:06.894750    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1204 12:56:06.899287    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 12:56:06.903094    5382 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 12:56:06.903133    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 12:56:06.906938    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:56:06.910686    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 12:56:06.914337    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:56:06.917967    5382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 12:56:06.921394    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 12:56:06.924533    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 12:56:06.927463    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 12:56:06.930691    5382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 12:56:06.933981    5382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 12:56:06.937565    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:07.002203    5382 ssh_runner.go:195] Run: sudo systemctl restart containerd
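The series of `sed -i` edits above pins containerd to the cgroupfs driver, among other tweaks. The core rewrite is a one-line regexp substitution; a runnable Go equivalent of the `SystemdCgroup = false` edit, applied to a sample config.toml fragment:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        // Same shape as the sed expression in the log: match the line
        // regardless of indentation and force the value to false.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, `${1}SystemdCgroup = false`))
    }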
	I1204 12:56:07.008547    5382 start.go:495] detecting cgroup driver to use...
	I1204 12:56:07.008622    5382 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 12:56:07.014175    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:56:07.019310    5382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 12:56:07.027024    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:56:07.032280    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 12:56:07.036559    5382 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 12:56:07.058395    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 12:56:07.063368    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:56:07.068614    5382 ssh_runner.go:195] Run: which cri-dockerd
	I1204 12:56:07.069838    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 12:56:07.072518    5382 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1204 12:56:07.077463    5382 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 12:56:07.166285    5382 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 12:56:07.254266    5382 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 12:56:07.254326    5382 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 12:56:07.260004    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:07.316359    5382 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 12:56:08.466825    5382 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.150432958s)
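The 130-byte `/etc/docker/daemon.json` scp'd above switches Docker to the cgroupfs driver as well. A sketch of generating that kind of payload; the log does not show the exact keys minikube writes, so `exec-opts` below is our guess at the standard Docker daemon option:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        daemon := map[string]any{
            // Standard Docker daemon.json key for the cgroup driver;
            // assumed here, not read from the log.
            "exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
            "log-driver": "json-file",
        }
        b, err := json.MarshalIndent(daemon, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }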
	I1204 12:56:08.466900    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 12:56:08.471321    5382 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 12:56:08.477833    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:56:08.483154    5382 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 12:56:08.563153    5382 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 12:56:08.626965    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:08.705935    5382 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 12:56:08.712126    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:56:08.716552    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:08.795409    5382 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 12:56:08.836194    5382 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 12:56:08.836293    5382 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 12:56:08.839996    5382 start.go:563] Will wait 60s for crictl version
	I1204 12:56:08.840059    5382 ssh_runner.go:195] Run: which crictl
	I1204 12:56:08.841347    5382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 12:56:08.857075    5382 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1204 12:56:08.857158    5382 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 12:56:08.874297    5382 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
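"Will wait 60s for socket path" above boils down to polling `stat` on the unix socket until it exists or the deadline passes. A self-contained Go sketch of that wait loop:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Succeed as soon as the path exists and is a unix socket.
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("cri-dockerd socket is up")
    }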
	I1204 12:56:05.056889    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:08.894427    5382 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1204 12:56:08.894593    5382 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1204 12:56:08.895879    5382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
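The `{ grep -v ...; echo ...; } > /tmp/h.$$` pipeline above is an idempotent /etc/hosts update: strip any stale `host.minikube.internal` line, then append the current mapping. The same logic in Go (writing /etc/hosts requires root; this is a sketch of the shape, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "10.0.2.2\thost.minikube.internal"
        raw, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(raw), "\n") {
            // Drop any previous mapping, matching the grep -v above.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
        if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("updated /etc/hosts")
    }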
	I1204 12:56:08.899407    5382 kubeadm.go:883] updating cluster {Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1204 12:56:08.899449    5382 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 12:56:08.899499    5382 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:56:08.909561    5382 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 12:56:08.909569    5382 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 12:56:08.909631    5382 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 12:56:08.913423    5382 ssh_runner.go:195] Run: which lz4
	I1204 12:56:08.914873    5382 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 12:56:08.916196    5382 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 12:56:08.916207    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1204 12:56:09.866144    5382 docker.go:653] duration metric: took 951.297584ms to copy over tarball
	I1204 12:56:09.866225    5382 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 12:56:11.070090    5382 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.203829666s)
	I1204 12:56:11.070112    5382 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 12:56:11.086536    5382 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 12:56:11.090263    5382 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1204 12:56:11.095174    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
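The preload step above is: check whether `/preloaded.tar.lz4` already exists, copy it over only when missing, unpack it with lz4 under /var, then delete it. A sketch of the extract-and-clean-up half; the scp of the cached tarball is host-specific and omitted:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            // In the log this stat failure triggers the scp from the cache.
            fmt.Println("tarball missing, would copy from cache:", err)
            return
        }
        // Same flags as the log: preserve security xattrs, decompress with lz4.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
        fmt.Println("preload extracted; removing", tarball)
        _ = os.Remove(tarball)
    }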
	I1204 12:56:10.059650    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:10.059766    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:10.077395    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:10.077529    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:10.089823    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:10.089905    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:10.101897    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:10.101996    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:10.113450    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:10.113534    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:10.125188    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:10.125266    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:10.137077    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:10.137155    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:10.147718    5191 logs.go:282] 0 containers: []
	W1204 12:56:10.147728    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:10.147797    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:10.164840    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:10.164865    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:10.164872    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:10.180136    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:10.180150    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:10.201165    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:10.201189    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:10.246747    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:10.246760    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:10.264827    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:10.264840    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:10.277719    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:10.277733    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:10.291654    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:10.291668    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:10.312101    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:10.312124    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:10.326765    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:10.326778    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:10.342758    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:10.342776    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:10.356387    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:10.356400    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:10.370458    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:10.370472    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:10.410203    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:10.410228    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:10.416223    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:10.416234    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:10.442665    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:10.442686    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:10.459162    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:10.459180    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
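The log-gathering pass above simply walks the container IDs returned by the `docker ps` filters and tails 400 lines from each. A compact Go version of that loop, reusing two IDs from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ids := []string{"2c4624f8f6cb", "777de47bab99"} // IDs taken from the log
        for _, id := range ids {
            out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Println("skip", id, ":", err)
                continue
            }
            fmt.Printf("--- %s ---\n%s", id, out)
        }
    }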
	I1204 12:56:12.974313    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:11.175185    5382 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 12:56:12.610954    5382 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.435709167s)
	I1204 12:56:12.611079    5382 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:56:12.625590    5382 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 12:56:12.625601    5382 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 12:56:12.625607    5382 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 12:56:12.629884    5382 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:12.631612    5382 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:12.633716    5382 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:12.633871    5382 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:12.636171    5382 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:12.636344    5382 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:12.638101    5382 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:12.638175    5382 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:12.639367    5382 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:12.639550    5382 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:12.640789    5382 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:12.641398    5382 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:12.642213    5382 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:12.642297    5382 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 12:56:12.643360    5382 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:12.644315    5382 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
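Each "daemon lookup ... No such image" line above is the expected first half of a lookup-then-fallback: ask the local daemon first, and on a miss fall back to the on-disk cache. A sketch of that pattern (the cache path below is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // haveInDaemon reports whether the local Docker daemon already has the image.
    func haveInDaemon(image string) bool {
        return exec.Command("docker", "image", "inspect", image).Run() == nil
    }

    func main() {
        const image = "registry.k8s.io/pause:3.7"
        if haveInDaemon(image) {
            fmt.Println("daemon already has", image)
            return
        }
        cached := os.ExpandEnv("$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7")
        fmt.Println("daemon lookup failed; would load", cached, "via `docker load`")
    }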
	W1204 12:56:13.256819    5382 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 12:56:13.256971    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:13.264758    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:13.271451    5382 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1204 12:56:13.271485    5382 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:13.271562    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:13.273031    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:13.280470    5382 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1204 12:56:13.280497    5382 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:13.280568    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:13.292280    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 12:56:13.292306    5382 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1204 12:56:13.292337    5382 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:13.292388    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:13.292428    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1204 12:56:13.299049    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1204 12:56:13.299189    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1204 12:56:13.306500    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:13.308192    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1204 12:56:13.308236    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1204 12:56:13.308250    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1204 12:56:13.308258    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1204 12:56:13.308268    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1204 12:56:13.333760    5382 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1204 12:56:13.333785    5382 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:13.333853    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:13.338184    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:13.349723    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1204 12:56:13.387402    5382 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1204 12:56:13.387424    5382 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:13.387498    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:13.410646    5382 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1204 12:56:13.410663    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1204 12:56:13.449753    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1204 12:56:13.470007    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:13.485472    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1204 12:56:13.520443    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1204 12:56:13.520451    5382 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1204 12:56:13.520477    5382 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:13.520543    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:13.551240    5382 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1204 12:56:13.551262    5382 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1204 12:56:13.551269    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1204 12:56:13.551327    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W1204 12:56:13.565207    5382 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 12:56:13.565337    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:13.594503    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1204 12:56:13.594655    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1204 12:56:13.596393    5382 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1204 12:56:13.596418    5382 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:13.596477    5382 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:13.611929    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1204 12:56:13.611967    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1204 12:56:13.638401    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 12:56:13.638545    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 12:56:13.649959    5382 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1204 12:56:13.649984    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1204 12:56:13.651634    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 12:56:13.651662    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1204 12:56:13.716737    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1204 12:56:13.716760    5382 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1204 12:56:13.716768    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1204 12:56:13.848026    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1204 12:56:13.848048    5382 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 12:56:13.848056    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1204 12:56:14.084155    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 12:56:14.084197    5382 cache_images.go:92] duration metric: took 1.458565209s to LoadCachedImages
	W1204 12:56:14.084238    5382 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
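The `sudo cat <tarball> | docker load` commands above stream each cached image into the daemon. The same pipe expressed in Go, with the tarball wired up as the command's stdin:

    package main

    import (
        "os"
        "os/exec"
    )

    // dockerLoad streams an image tarball into `docker load`.
    func dockerLoad(tarball string) error {
        f, err := os.Open(tarball)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := dockerLoad("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
            panic(err)
        }
    }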
	I1204 12:56:14.084243    5382 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1204 12:56:14.084290    5382 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-827000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
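The kubelet drop-in above is rendered from the cluster config. A sketch of producing that unit text with text/template; the struct and the flag subset are ours, abbreviated from the full ExecStart in the log:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        // Field values mirror the log; the struct itself is illustrative.
        _ = t.Execute(os.Stdout, struct {
            Version, CRISocket, Node, IP string
        }{"v1.24.1", "unix:///var/run/cri-dockerd.sock", "stopped-upgrade-827000", "10.0.2.15"})
    }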
	I1204 12:56:14.084362    5382 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 12:56:14.098149    5382 cni.go:84] Creating CNI manager for ""
	I1204 12:56:14.098161    5382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:56:14.098170    5382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 12:56:14.098182    5382 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-827000 NodeName:stopped-upgrade-827000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 12:56:14.098253    5382 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-827000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 12:56:14.098321    5382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1204 12:56:14.101625    5382 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 12:56:14.101665    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 12:56:14.104403    5382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1204 12:56:14.109263    5382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 12:56:14.114254    5382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1204 12:56:14.119946    5382 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1204 12:56:14.121073    5382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 12:56:14.124484    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:14.209855    5382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 12:56:14.221066    5382 certs.go:68] Setting up /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000 for IP: 10.0.2.15
	I1204 12:56:14.221077    5382 certs.go:194] generating shared ca certs ...
	I1204 12:56:14.221085    5382 certs.go:226] acquiring lock for ca certs: {Name:mk686f72a960a82dacaf4c130e092ac78361d077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.221273    5382 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key
	I1204 12:56:14.221552    5382 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key
	I1204 12:56:14.221559    5382 certs.go:256] generating profile certs ...
	I1204 12:56:14.221805    5382 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.key
	I1204 12:56:14.221821    5382 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32
	I1204 12:56:14.221830    5382 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1204 12:56:14.384596    5382 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32 ...
	I1204 12:56:14.384610    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32: {Name:mkb02dddabe8308f2532bcf99f1dd0c86932dd1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.384947    5382 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32 ...
	I1204 12:56:14.384952    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32: {Name:mk59f57f124d79495880c414dd717ad3ede2f670 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.385123    5382 certs.go:381] copying /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt
	I1204 12:56:14.385267    5382 certs.go:385] copying /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key
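"generating signed profile cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]" above is a CA-signed serving certificate whose SANs are those four IPs. A self-contained Go sketch of issuing such a cert; it generates a throwaway CA in place of minikube's persistent one, and the lifetimes are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Throwaway CA standing in for the persistent ca.{crt,key} in the log.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Serving cert whose SAN IPs mirror the log line above.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative lifetime
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }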
	I1204 12:56:14.385661    5382 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/proxy-client.key
	I1204 12:56:14.385868    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem (1338 bytes)
	W1204 12:56:14.386063    5382 certs.go:480] ignoring /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856_empty.pem, impossibly tiny 0 bytes
	I1204 12:56:14.386074    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 12:56:14.386104    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem (1082 bytes)
	I1204 12:56:14.386126    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem (1123 bytes)
	I1204 12:56:14.386151    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem (1679 bytes)
	I1204 12:56:14.386201    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:56:14.386553    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 12:56:14.393574    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 12:56:14.400101    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 12:56:14.407199    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 12:56:14.414647    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 12:56:14.422210    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 12:56:14.429303    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 12:56:14.436685    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 12:56:14.443440    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 12:56:14.450269    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem --> /usr/share/ca-certificates/1856.pem (1338 bytes)
	I1204 12:56:14.457763    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /usr/share/ca-certificates/18562.pem (1708 bytes)
	I1204 12:56:14.464510    5382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 12:56:14.469607    5382 ssh_runner.go:195] Run: openssl version
	I1204 12:56:14.471497    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 12:56:14.474543    5382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:56:14.475962    5382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:56:14.475992    5382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:56:14.477746    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 12:56:14.480886    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1856.pem && ln -fs /usr/share/ca-certificates/1856.pem /etc/ssl/certs/1856.pem"
	I1204 12:56:14.483860    5382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1856.pem
	I1204 12:56:14.485168    5382 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:00 /usr/share/ca-certificates/1856.pem
	I1204 12:56:14.485200    5382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1856.pem
	I1204 12:56:14.486904    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1856.pem /etc/ssl/certs/51391683.0"
	I1204 12:56:14.490311    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18562.pem && ln -fs /usr/share/ca-certificates/18562.pem /etc/ssl/certs/18562.pem"
	I1204 12:56:14.493667    5382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18562.pem
	I1204 12:56:14.495054    5382 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:00 /usr/share/ca-certificates/18562.pem
	I1204 12:56:14.495095    5382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18562.pem
	I1204 12:56:14.497197    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18562.pem /etc/ssl/certs/3ec20f2e.0"
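The `openssl x509 -hash` plus `ln -fs` pairs above implement OpenSSL's c_rehash convention: each trusted CA gets a symlink named /etc/ssl/certs/<subject-hash>.0. A sketch that computes the link name the same way:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask openssl for the subject hash, exactly as the log does.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }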
	I1204 12:56:14.500146    5382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 12:56:14.501478    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 12:56:14.503642    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 12:56:14.505457    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 12:56:14.508024    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 12:56:14.509777    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 12:56:14.511536    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
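The `-checkend 86400` runs above ask whether each certificate expires within the next 24 hours. The equivalent check in Go, parsing the PEM and comparing NotAfter:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon reports whether the certificate at path expires within the window.
    func expiresSoon(path string, within time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }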
	I1204 12:56:14.513374    5382 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:56:14.513445    5382 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:56:14.523829    5382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 12:56:14.527015    5382 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 12:56:14.527024    5382 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 12:56:14.527058    5382 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 12:56:14.529968    5382 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:56:14.530270    5382 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-827000" does not appear in /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:56:14.530368    5382 kubeconfig.go:62] /Users/jenkins/minikube-integration/19985-1334/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-827000" cluster setting kubeconfig missing "stopped-upgrade-827000" context setting]
	I1204 12:56:14.530570    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.531016    5382 kapi.go:59] client config for stopped-upgrade-827000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.key", CAFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10452b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 12:56:14.531525    5382 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 12:56:14.534306    5382 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-827000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
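The drift detection above relies purely on `diff -u`'s exit status: 0 means the stored and freshly generated kubeadm.yaml match, 1 means they differ and the cluster is reconfigured. A sketch of that decision:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.Output()
        if err == nil {
            fmt.Println("no drift; reuse existing kubeadm.yaml")
            return
        }
        // diff exits 1 when the files differ; the unified diff is on stdout.
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            fmt.Printf("drift detected, will reconfigure:\n%s", out)
            return
        }
        panic(err)
    }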
	I1204 12:56:14.534311    5382 kubeadm.go:1160] stopping kube-system containers ...
	I1204 12:56:14.534360    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:56:14.544904    5382 docker.go:483] Stopping containers: [7b2edfde1470 62e56b454444 7a4a4f7d1323 01a8a4e18f3f 3d1d1ce7fee7 67a82ac16594 3d3df5af7004 58290a52fcff]
	I1204 12:56:14.544973    5382 ssh_runner.go:195] Run: docker stop 7b2edfde1470 62e56b454444 7a4a4f7d1323 01a8a4e18f3f 3d1d1ce7fee7 67a82ac16594 3d3df5af7004 58290a52fcff
	I1204 12:56:14.555330    5382 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 12:56:14.561217    5382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 12:56:14.563988    5382 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 12:56:14.563997    5382 kubeadm.go:157] found existing configuration files:
	
	I1204 12:56:14.564027    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf
	I1204 12:56:14.566545    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 12:56:14.566575    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 12:56:14.569567    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf
	I1204 12:56:14.572242    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 12:56:14.572268    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 12:56:14.574775    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf
	I1204 12:56:14.577870    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 12:56:14.577896    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 12:56:14.580829    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf
	I1204 12:56:14.583426    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 12:56:14.583470    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
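
The ls/grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is tested for the expected control-plane endpoint and deleted when the test fails (a missing file also fails the test, which is why grep exits with status 2 and rm still runs). A minimal Go sketch of that check-and-remove pattern, using a hypothetical removeIfStale helper rather than minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfStale deletes path unless it contains endpoint.
	// A missing file counts as stale, matching the log above where
	// grep exits 2 and the file is removed anyway.
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return nil // endpoint found: keep the file
		}
		return os.RemoveAll(path) // stale or missing: remove (no error if absent)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:63857"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(f, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
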
	I1204 12:56:14.586559    5382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 12:56:14.589899    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:14.614414    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:15.093500    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:15.218791    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:15.242966    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:15.263080    5382 api_server.go:52] waiting for apiserver process to appear ...
	I1204 12:56:15.263168    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:15.765197    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:17.976536    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:17.976673    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:17.988960    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:17.989053    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:18.000366    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:18.000453    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:18.012500    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:18.012707    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:18.024183    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:18.024249    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:18.038936    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:18.039003    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:18.049820    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:18.049885    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:18.060375    5191 logs.go:282] 0 containers: []
	W1204 12:56:18.060387    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:18.060447    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:18.071202    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:18.071216    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:18.071221    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:18.085192    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:18.085202    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:18.104105    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:18.104121    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:18.116308    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:18.116318    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:18.128660    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:18.128693    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:18.133716    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:18.133723    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:18.148415    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:18.148424    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:18.163216    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:18.163225    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:18.175807    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:18.175817    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:18.194051    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:18.194063    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:18.209193    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:18.209208    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:18.232074    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:18.232090    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:18.245146    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:18.245158    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:18.283321    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:18.283341    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:18.320142    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:18.320154    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:18.338045    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:18.338056    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:16.265228    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:16.269621    5382 api_server.go:72] duration metric: took 1.006529625s to wait for apiserver process to appear ...
	I1204 12:56:16.269631    5382 api_server.go:88] waiting for apiserver healthz status ...
	I1204 12:56:16.269645    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
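
The alternating "Checking apiserver healthz" and "stopped: ... context deadline exceeded" lines that follow are a poll loop: each probe GETs /healthz with a short per-request client timeout, and the loop retries until the apiserver answers or an overall deadline expires. A minimal sketch of such a loop, with illustrative timeout values rather than minikube's exact ones:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver uses a self-signed cert, so this sketch skips verification.
		client := &http.Client{
			Timeout: 4 * time.Second, // per-probe deadline (illustrative)
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // overall wait (illustrative)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			} else {
				fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			}
			time.Sleep(time.Second) // back off before the next probe
		}
		fmt.Println("gave up waiting for /healthz")
	}
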
	I1204 12:56:20.852795    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:21.271798    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:21.271833    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:25.855139    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:25.855287    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:25.867541    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:25.867624    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:25.878443    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:25.878523    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:25.889746    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:25.889825    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:25.901167    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:25.901235    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:25.911960    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:25.912035    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:25.923273    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:25.923344    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:25.945896    5191 logs.go:282] 0 containers: []
	W1204 12:56:25.945911    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:25.945981    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:25.956279    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:25.956298    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:25.956305    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:25.970524    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:25.970538    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:25.981657    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:25.981669    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:25.993378    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:25.993391    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:26.031531    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:26.031541    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:26.048628    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:26.048638    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:26.062840    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:26.062850    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:26.077384    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:26.077395    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:26.088298    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:26.088310    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:26.112522    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:26.112530    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:26.124550    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:26.124584    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:26.142098    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:26.142107    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:26.153724    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:26.153733    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:26.158024    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:26.158031    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:26.192373    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:26.192384    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:26.204987    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:26.204997    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:28.721665    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:26.272191    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:26.272221    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:33.723920    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:33.724035    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:33.757794    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:33.757879    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:33.776654    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:33.776740    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:33.788609    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:33.788689    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:33.799496    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:33.799575    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:33.810624    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:33.810700    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:33.821394    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:33.821477    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:33.832121    5191 logs.go:282] 0 containers: []
	W1204 12:56:33.832135    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:33.832204    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:33.843143    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:33.843160    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:33.843166    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:33.861809    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:33.861823    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:33.879752    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:33.879766    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:33.914686    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:33.914700    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:33.927116    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:33.927127    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:33.939970    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:33.939983    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:33.962849    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:33.962858    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:33.973796    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:33.973809    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:33.985663    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:33.985676    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:34.022113    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:34.022128    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:34.026494    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:34.026503    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:34.040108    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:34.040121    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:34.052592    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:34.052606    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:34.066614    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:34.066624    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:34.080703    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:34.080716    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:34.095181    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:34.095195    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:31.272660    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:31.272684    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:36.609300    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:36.273279    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:36.273375    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:41.611720    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:41.611918    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:56:41.626028    5191 logs.go:282] 2 containers: [952e6b922394 f670be475b38]
	I1204 12:56:41.626118    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:56:41.637948    5191 logs.go:282] 2 containers: [2c4624f8f6cb 499812ae8462]
	I1204 12:56:41.638021    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:56:41.653284    5191 logs.go:282] 1 containers: [0539a5d1e00c]
	I1204 12:56:41.653367    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:56:41.664102    5191 logs.go:282] 2 containers: [6549b4eea5dd 70fe93d0207d]
	I1204 12:56:41.664164    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:56:41.674622    5191 logs.go:282] 1 containers: [1ac0dd0fc9cd]
	I1204 12:56:41.674686    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:56:41.685786    5191 logs.go:282] 2 containers: [777de47bab99 c87f5d60400f]
	I1204 12:56:41.685855    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:56:41.696749    5191 logs.go:282] 0 containers: []
	W1204 12:56:41.696763    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:56:41.696830    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:56:41.707706    5191 logs.go:282] 1 containers: [da9e11e274e9]
	I1204 12:56:41.707723    5191 logs.go:123] Gathering logs for etcd [2c4624f8f6cb] ...
	I1204 12:56:41.707729    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4624f8f6cb"
	I1204 12:56:41.723286    5191 logs.go:123] Gathering logs for kube-controller-manager [c87f5d60400f] ...
	I1204 12:56:41.723299    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87f5d60400f"
	I1204 12:56:41.741335    5191 logs.go:123] Gathering logs for kube-apiserver [952e6b922394] ...
	I1204 12:56:41.741350    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 952e6b922394"
	I1204 12:56:41.756004    5191 logs.go:123] Gathering logs for coredns [0539a5d1e00c] ...
	I1204 12:56:41.756017    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0539a5d1e00c"
	I1204 12:56:41.767215    5191 logs.go:123] Gathering logs for kube-scheduler [70fe93d0207d] ...
	I1204 12:56:41.767225    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70fe93d0207d"
	I1204 12:56:41.781612    5191 logs.go:123] Gathering logs for kube-proxy [1ac0dd0fc9cd] ...
	I1204 12:56:41.781621    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ac0dd0fc9cd"
	I1204 12:56:41.793368    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:56:41.793381    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:56:41.805015    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:56:41.805027    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:56:41.841859    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:56:41.841871    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:56:41.875805    5191 logs.go:123] Gathering logs for storage-provisioner [da9e11e274e9] ...
	I1204 12:56:41.875821    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da9e11e274e9"
	I1204 12:56:41.888950    5191 logs.go:123] Gathering logs for etcd [499812ae8462] ...
	I1204 12:56:41.888961    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 499812ae8462"
	I1204 12:56:41.906577    5191 logs.go:123] Gathering logs for kube-controller-manager [777de47bab99] ...
	I1204 12:56:41.906588    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777de47bab99"
	I1204 12:56:41.924484    5191 logs.go:123] Gathering logs for kube-scheduler [6549b4eea5dd] ...
	I1204 12:56:41.924497    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6549b4eea5dd"
	I1204 12:56:41.939444    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:56:41.939460    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:56:41.963736    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:56:41.963746    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:56:41.968572    5191 logs.go:123] Gathering logs for kube-apiserver [f670be475b38] ...
	I1204 12:56:41.968580    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f670be475b38"
	I1204 12:56:44.486190    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:41.274445    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:41.274479    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:49.488652    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:49.488699    5191 kubeadm.go:597] duration metric: took 4m4.397901333s to restartPrimaryControlPlane
	W1204 12:56:49.488740    5191 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 12:56:49.488759    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 12:56:50.488608    5191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 12:56:50.494058    5191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 12:56:50.497093    5191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 12:56:50.500025    5191 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 12:56:50.500031    5191 kubeadm.go:157] found existing configuration files:
	
	I1204 12:56:50.500061    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/admin.conf
	I1204 12:56:50.502730    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 12:56:50.502766    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 12:56:50.505505    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/kubelet.conf
	I1204 12:56:50.508692    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 12:56:50.508730    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 12:56:50.511859    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/controller-manager.conf
	I1204 12:56:50.514765    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 12:56:50.514798    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 12:56:50.517340    5191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/scheduler.conf
	I1204 12:56:50.520527    5191 kubeadm.go:163] "https://control-plane.minikube.internal:63639" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:63639 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 12:56:50.520560    5191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 12:56:50.524103    5191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 12:56:50.544213    5191 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 12:56:50.544258    5191 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 12:56:50.603998    5191 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 12:56:50.604084    5191 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 12:56:50.604194    5191 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 12:56:50.653559    5191 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 12:56:50.657608    5191 out.go:235]   - Generating certificates and keys ...
	I1204 12:56:50.657639    5191 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 12:56:50.657665    5191 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 12:56:50.657703    5191 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 12:56:50.657733    5191 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 12:56:50.657773    5191 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 12:56:50.657804    5191 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 12:56:50.657839    5191 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 12:56:50.657871    5191 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 12:56:50.657911    5191 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 12:56:50.657946    5191 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 12:56:50.657972    5191 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 12:56:50.658002    5191 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 12:56:50.717960    5191 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 12:56:50.801884    5191 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 12:56:50.836210    5191 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 12:56:50.876747    5191 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 12:56:50.909501    5191 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 12:56:50.909889    5191 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 12:56:50.909936    5191 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 12:56:50.998369    5191 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 12:56:46.275539    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:46.275580    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:51.002294    5191 out.go:235]   - Booting up control plane ...
	I1204 12:56:51.002335    5191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 12:56:51.002372    5191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 12:56:51.002410    5191 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 12:56:51.002456    5191 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 12:56:51.010916    5191 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 12:56:51.277242    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:51.277265    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:55.513022    5191 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501964 seconds
	I1204 12:56:55.513083    5191 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 12:56:55.516421    5191 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 12:56:56.038013    5191 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 12:56:56.038414    5191 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-728000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 12:56:56.542066    5191 kubeadm.go:310] [bootstrap-token] Using token: 6zki70.26reqbzbfpvltfx2
	I1204 12:56:56.547777    5191 out.go:235]   - Configuring RBAC rules ...
	I1204 12:56:56.547840    5191 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 12:56:56.547885    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 12:56:56.549946    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 12:56:56.555187    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 12:56:56.556303    5191 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 12:56:56.557118    5191 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 12:56:56.562647    5191 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 12:56:56.737376    5191 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 12:56:56.946436    5191 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 12:56:56.946845    5191 kubeadm.go:310] 
	I1204 12:56:56.946881    5191 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 12:56:56.946913    5191 kubeadm.go:310] 
	I1204 12:56:56.946949    5191 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 12:56:56.946976    5191 kubeadm.go:310] 
	I1204 12:56:56.946992    5191 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 12:56:56.947032    5191 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 12:56:56.947057    5191 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 12:56:56.947059    5191 kubeadm.go:310] 
	I1204 12:56:56.947107    5191 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 12:56:56.947112    5191 kubeadm.go:310] 
	I1204 12:56:56.947133    5191 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 12:56:56.947139    5191 kubeadm.go:310] 
	I1204 12:56:56.947165    5191 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 12:56:56.947202    5191 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 12:56:56.947241    5191 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 12:56:56.947245    5191 kubeadm.go:310] 
	I1204 12:56:56.947290    5191 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 12:56:56.947358    5191 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 12:56:56.947394    5191 kubeadm.go:310] 
	I1204 12:56:56.947434    5191 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6zki70.26reqbzbfpvltfx2 \
	I1204 12:56:56.947484    5191 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 \
	I1204 12:56:56.947495    5191 kubeadm.go:310] 	--control-plane 
	I1204 12:56:56.947498    5191 kubeadm.go:310] 
	I1204 12:56:56.947544    5191 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 12:56:56.947552    5191 kubeadm.go:310] 
	I1204 12:56:56.947592    5191 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6zki70.26reqbzbfpvltfx2 \
	I1204 12:56:56.947644    5191 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 
	I1204 12:56:56.947705    5191 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 12:56:56.947716    5191 cni.go:84] Creating CNI manager for ""
	I1204 12:56:56.947727    5191 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:56:56.951535    5191 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 12:56:56.956530    5191 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 12:56:56.959788    5191 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
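
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced at the start of this step. The exact contents are not shown in the log; the sketch below writes an illustrative bridge+portmap conflist of the same general shape (the subnet and plugin options are assumptions, not the actual payload):

	package main

	import "os"

	func main() {
		// Illustrative bridge CNI conflist, similar in shape to what
		// minikube copies to /etc/cni/net.d/1-k8s.conflist above.
		conflist := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}
	`
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
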
	I1204 12:56:56.965746    5191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 12:56:56.965830    5191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 12:56:56.965857    5191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-728000 minikube.k8s.io/updated_at=2024_12_04T12_56_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=running-upgrade-728000 minikube.k8s.io/primary=true
	I1204 12:56:57.006662    5191 ops.go:34] apiserver oom_adj: -16
	I1204 12:56:57.006661    5191 kubeadm.go:1113] duration metric: took 40.901042ms to wait for elevateKubeSystemPrivileges
	I1204 12:56:57.006676    5191 kubeadm.go:394] duration metric: took 4m11.931105333s to StartCluster
	I1204 12:56:57.006686    5191 settings.go:142] acquiring lock: {Name:mkc9bc1437987e3de306bb25e3c2f4effe0b8b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:57.006789    5191 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:56:57.007201    5191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:57.007405    5191 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:56:57.007416    5191 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 12:56:57.007453    5191 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-728000"
	I1204 12:56:57.007456    5191 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-728000"
	I1204 12:56:57.007466    5191 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-728000"
	W1204 12:56:57.007471    5191 addons.go:243] addon storage-provisioner should already be in state true
	I1204 12:56:57.007485    5191 host.go:66] Checking if "running-upgrade-728000" exists ...
	I1204 12:56:57.007467    5191 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-728000"
	I1204 12:56:57.007597    5191 config.go:182] Loaded profile config "running-upgrade-728000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:56:57.008473    5191 kapi.go:59] client config for running-upgrade-728000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/running-upgrade-728000/client.key", CAFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102317740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
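
The rest.Config dump above shows the client configuration minikube assembles for the profile: host, client certificate/key, and CA taken from the kubeconfig. A hedged client-go sketch that builds an equivalent client from the kubeconfig path seen elsewhere in this log (the node listing at the end is only a usage example):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load host, client cert/key, and CA from the profile's kubeconfig,
		// much like the rest.Config shown in the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19985-1334/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}
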
	I1204 12:56:57.008781    5191 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-728000"
	W1204 12:56:57.008786    5191 addons.go:243] addon default-storageclass should already be in state true
	I1204 12:56:57.008793    5191 host.go:66] Checking if "running-upgrade-728000" exists ...
	I1204 12:56:57.011596    5191 out.go:177] * Verifying Kubernetes components...
	I1204 12:56:57.012004    5191 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 12:56:57.015637    5191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 12:56:57.015644    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	I1204 12:56:57.019469    5191 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:57.023533    5191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:57.026561    5191 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 12:56:57.026567    5191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 12:56:57.026573    5191 sshutil.go:53] new ssh client: &{IP:localhost Port:63607 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/running-upgrade-728000/id_rsa Username:docker}
	I1204 12:56:57.116127    5191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 12:56:57.121858    5191 api_server.go:52] waiting for apiserver process to appear ...
	I1204 12:56:57.121909    5191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:57.126108    5191 api_server.go:72] duration metric: took 118.68875ms to wait for apiserver process to appear ...
	I1204 12:56:57.126115    5191 api_server.go:88] waiting for apiserver healthz status ...
	I1204 12:56:57.126123    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:57.158685    5191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 12:56:57.172950    5191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 12:56:57.512988    5191 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 12:56:57.513001    5191 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 12:56:56.278896    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:56.278980    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:02.128254    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:02.128297    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:01.281407    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:01.281430    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:07.128659    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:07.128692    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:06.283408    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:06.283466    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:12.129071    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:12.129104    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:11.284140    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:11.284160    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:17.129554    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:17.129581    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:16.286410    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:16.286646    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:16.303009    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:16.303103    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:16.316291    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:16.316374    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:16.327446    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:16.327526    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:16.338476    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:16.338562    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:16.349319    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:16.349395    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:16.359896    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:16.359982    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:16.373286    5382 logs.go:282] 0 containers: []
	W1204 12:57:16.373298    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:16.373361    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:16.383153    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:16.383171    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:16.383176    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:16.394193    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:16.394206    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:16.435862    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:16.435874    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:16.451391    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:16.451400    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:16.465288    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:16.465297    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:16.480609    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:16.480624    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:16.498090    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:16.498099    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:16.513213    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:16.513228    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:16.524787    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:16.524799    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:16.540932    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:16.540947    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:16.545161    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:16.545167    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:16.556607    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:16.556627    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:16.570185    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:16.570195    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:16.581992    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:16.582002    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:16.595227    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:16.595237    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:16.620337    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:16.620346    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:16.656458    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:16.656464    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:19.267631    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:22.130193    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:22.130239    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:24.269958    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:24.270122    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:24.287182    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:24.287265    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:24.299033    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:24.299121    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:24.309659    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:24.309731    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:24.320147    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:24.320223    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:24.330434    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:24.330499    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:24.342062    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:24.342142    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:24.352046    5382 logs.go:282] 0 containers: []
	W1204 12:57:24.352057    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:24.352119    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:24.362496    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:24.362513    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:24.362518    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:24.381400    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:24.381412    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:24.392945    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:24.392955    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:24.417965    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:24.417972    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:24.431506    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:24.431518    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:24.446264    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:24.446277    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:24.458159    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:24.458173    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:24.463067    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:24.463074    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:24.476937    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:24.476952    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:24.518323    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:24.518333    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:24.532719    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:24.532730    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:24.548806    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:24.548821    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:24.586641    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:24.586652    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:24.630817    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:24.630827    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:24.647189    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:24.647201    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:24.658651    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:24.658666    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:24.669736    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:24.669746    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:27.131217    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:27.131265    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 12:57:27.515775    5191 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 12:57:27.519647    5191 out.go:177] * Enabled addons: storage-provisioner
	I1204 12:57:27.531511    5191 addons.go:510] duration metric: took 30.523720375s for enable addons: enabled=[storage-provisioner]
	I1204 12:57:27.183377    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:32.132370    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:32.132437    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:32.185742    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:32.185954    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:32.206594    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:32.206702    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:32.221515    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:32.221599    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:32.233660    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:32.233741    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:32.244788    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:32.244868    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:32.255105    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:32.255180    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:32.265154    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:32.265221    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:32.275510    5382 logs.go:282] 0 containers: []
	W1204 12:57:32.275526    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:32.275589    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:32.288558    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:32.288577    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:32.288582    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:32.302217    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:32.302228    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:32.319366    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:32.319376    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:32.331178    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:32.331190    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:32.343086    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:32.343099    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:32.379443    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:32.379453    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:32.420078    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:32.420091    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:32.435020    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:32.435032    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:32.446764    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:32.446776    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:32.458631    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:32.458644    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:32.472989    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:32.473000    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:32.484423    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:32.484433    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:32.522169    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:32.522181    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:32.533774    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:32.533786    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:32.548876    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:32.548887    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:32.552989    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:32.552997    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:32.571727    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:32.571739    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:35.097161    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:37.133901    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:37.133963    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:40.099426    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:40.099582    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:40.115030    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:40.115123    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:40.126897    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:40.126970    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:40.137516    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:40.137593    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:40.150084    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:40.150170    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:40.160743    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:40.160821    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:40.171055    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:40.171122    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:40.181362    5382 logs.go:282] 0 containers: []
	W1204 12:57:40.181374    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:40.181433    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:40.192077    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:40.192093    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:40.192098    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:40.203651    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:40.203663    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:40.207922    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:40.207928    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:40.245063    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:40.245075    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:40.258937    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:40.258946    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:40.280689    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:40.280707    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:40.292643    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:40.292658    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:40.308912    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:40.308923    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:40.347844    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:40.347866    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:40.364544    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:40.364555    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:40.375967    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:40.375976    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:40.414958    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:40.414972    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:40.428347    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:40.428357    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:40.446055    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:40.446064    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:40.472083    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:40.472090    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:40.486737    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:40.486747    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:40.498256    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:40.498267    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:42.135607    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:42.135671    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:43.018995    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:47.137744    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:47.137791    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:48.021435    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:48.021641    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:48.037971    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:48.038066    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:48.051228    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:48.051309    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:48.062102    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:48.062183    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:48.077966    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:48.078053    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:48.088885    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:48.088966    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:48.099260    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:48.099341    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:48.109516    5382 logs.go:282] 0 containers: []
	W1204 12:57:48.109530    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:48.109594    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:48.119752    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:48.119768    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:48.119773    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:48.131577    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:48.131588    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:48.145579    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:48.145592    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:48.158331    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:48.158341    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:48.170706    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:48.170718    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:48.181896    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:48.181906    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:48.196692    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:48.196707    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:48.220018    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:48.220028    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:48.223927    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:48.223936    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:48.241613    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:48.241625    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:48.283742    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:48.283752    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:48.322246    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:48.322257    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:48.342168    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:48.342179    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:48.356485    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:48.356494    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:48.368754    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:48.368769    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:48.402928    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:48.402937    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:48.418147    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:48.418159    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:50.938435    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:52.140087    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:52.140115    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:55.940931    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:55.941390    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:55.971330    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:55.971484    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:55.989325    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:55.989444    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:56.003213    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:56.003295    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:56.015182    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:56.015265    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:56.025697    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:56.025775    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:56.036646    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:56.036727    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:56.047478    5382 logs.go:282] 0 containers: []
	W1204 12:57:56.047489    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:56.047557    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:56.058449    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:56.058468    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:56.058475    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:56.070360    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:56.070374    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:56.086586    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:56.086598    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:56.098293    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:56.098306    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:56.122741    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:56.122756    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:57.142387    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:57.142527    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:57.154864    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:57:57.154938    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:57.165397    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:57:57.165477    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:57.175984    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:57:57.176065    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:57.187420    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:57:57.187504    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:57.199013    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:57:57.199110    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:57.209045    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:57:57.209116    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:57.219339    5191 logs.go:282] 0 containers: []
	W1204 12:57:57.219351    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:57.219415    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:57.229862    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:57:57.229877    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:57.229883    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:57.254042    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:57.254052    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:57.287538    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:57:57.287549    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:57:57.301525    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:57:57.301535    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:57:57.312733    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:57:57.312746    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:57:57.326999    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:57:57.327011    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:57:57.348438    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:57:57.348449    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:57:57.359826    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:57.359836    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:57.364500    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:57.364507    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:57.401529    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:57:57.401543    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:57:57.415594    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:57:57.415607    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:57:57.427300    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:57:57.427313    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:57:57.439422    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:57:57.439435    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:59.952779    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:56.158446    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:56.158457    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:56.173115    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:56.173125    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:56.188127    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:56.188145    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:56.199751    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:56.199762    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:56.217127    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:56.217137    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:56.255843    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:56.255857    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:56.293027    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:56.293042    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:56.307231    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:56.307245    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:56.311404    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:56.311410    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:56.329538    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:56.329549    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:56.341526    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:56.341537    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:56.356129    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:56.356144    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:58.870330    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:04.955106    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:04.955222    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:04.968071    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:04.968158    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:04.978863    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:04.978935    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:04.989200    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:04.989265    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:04.999975    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:05.000052    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:05.010431    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:05.010511    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:03.872769    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:03.873254    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:03.920286    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:03.920409    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:03.936103    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:03.936200    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:03.948475    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:03.948560    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:03.959645    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:03.959730    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:03.970138    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:03.970213    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:03.980756    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:03.980831    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:03.991297    5382 logs.go:282] 0 containers: []
	W1204 12:58:03.991312    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:03.991380    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:04.001906    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:04.001922    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:04.001929    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:04.023273    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:04.023284    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:04.038249    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:04.038261    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:04.042823    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:04.042830    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:04.078248    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:04.078262    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:04.092753    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:04.092765    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:04.106940    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:04.106951    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:04.118620    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:04.118634    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:04.143596    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:04.143607    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:04.157353    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:04.157363    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:04.194972    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:04.194983    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:04.232726    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:04.232737    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:04.244621    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:04.244637    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:04.257510    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:04.257524    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:04.275405    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:04.275414    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:04.290483    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:04.290493    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:04.302238    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:04.302253    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:05.022178    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:05.022252    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:05.032834    5191 logs.go:282] 0 containers: []
	W1204 12:58:05.032847    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:05.032914    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:05.043629    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:05.043647    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:05.043653    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:05.057575    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:05.057586    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:05.070051    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:05.070061    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:05.081983    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:05.081995    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:05.097129    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:05.097138    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:05.108820    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:05.108832    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:05.127478    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:05.127490    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:05.162525    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:05.162536    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:05.167565    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:05.167572    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:05.204203    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:05.204215    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:05.222980    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:05.222992    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:05.235159    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:05.235171    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:05.254437    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:05.254448    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:07.782550    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:06.818322    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:12.784990    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:12.785162    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:12.801585    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:12.801681    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:12.814431    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:12.814507    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:12.825350    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:12.825426    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:12.837692    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:12.837771    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:12.848876    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:12.848954    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:12.859704    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:12.859780    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:12.870153    5191 logs.go:282] 0 containers: []
	W1204 12:58:12.870164    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:12.870225    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:12.880787    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:12.880803    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:12.880809    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:12.896507    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:12.896521    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:12.921634    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:12.921642    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:12.933713    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:12.933725    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:12.938963    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:12.938970    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:12.953567    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:12.953578    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:12.964757    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:12.964768    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:12.977245    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:12.977257    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:12.990054    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:12.990067    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:13.024324    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:13.024336    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:13.064187    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:13.064198    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:13.078674    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:13.078687    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:13.093358    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:13.093370    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:11.821134    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:11.821355    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:11.841153    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:11.841260    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:11.855327    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:11.855414    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:11.867658    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:11.867741    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:11.878508    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:11.878590    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:11.889019    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:11.889090    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:11.900564    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:11.900632    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:11.913775    5382 logs.go:282] 0 containers: []
	W1204 12:58:11.913787    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:11.913870    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:11.924486    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:11.924508    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:11.924514    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:11.960060    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:11.960075    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:11.977254    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:11.977271    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:11.990646    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:11.990656    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:12.006248    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:12.006259    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:12.019582    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:12.019593    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:12.031106    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:12.031118    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:12.035305    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:12.035312    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:12.048946    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:12.048957    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:12.072544    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:12.072552    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:12.084391    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:12.084402    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:12.121390    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:12.121398    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:12.135070    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:12.135100    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:12.176741    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:12.176751    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:12.194583    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:12.194593    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:12.206853    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:12.206863    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:12.223923    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:12.223934    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:14.739797    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:15.613335    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:19.742282    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:19.742487    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:19.756323    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:19.756423    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:19.767813    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:19.767890    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:19.778092    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:19.778172    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:19.788745    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:19.788820    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:19.799162    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:19.799250    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:19.809490    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:19.809567    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:19.819355    5382 logs.go:282] 0 containers: []
	W1204 12:58:19.819369    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:19.819433    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:19.830066    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:19.830085    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:19.830090    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:19.845063    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:19.845076    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:19.859780    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:19.859794    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:19.871528    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:19.871538    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:19.908243    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:19.908251    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:19.941899    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:19.941910    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:19.981497    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:19.981510    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:20.006445    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:20.006457    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:20.024480    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:20.024491    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:20.036552    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:20.036565    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:20.052226    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:20.052240    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:20.064139    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:20.064150    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:20.078061    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:20.078072    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:20.102145    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:20.102153    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:20.106647    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:20.106654    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:20.118015    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:20.118025    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:20.133376    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:20.133386    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:20.615743    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:20.615922    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:20.631540    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:20.631626    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:20.644665    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:20.644748    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:20.655099    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:20.655169    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:20.667722    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:20.667804    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:20.680357    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:20.680431    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:20.690728    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:20.690797    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:20.701235    5191 logs.go:282] 0 containers: []
	W1204 12:58:20.701247    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:20.701311    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:20.711375    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:20.711395    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:20.711401    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:20.746670    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:20.746681    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:20.760486    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:20.760499    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:20.771884    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:20.771897    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:20.783504    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:20.783518    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:20.795518    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:20.795533    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:20.799939    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:20.799948    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:20.816150    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:20.816161    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:20.831738    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:20.831752    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:20.846303    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:20.846317    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:20.857980    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:20.857993    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:20.875162    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:20.875173    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:20.899211    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:20.899223    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:23.434642    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:22.652482    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:28.436948    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:28.437143    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:28.454229    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:28.454326    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:28.467557    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:28.467632    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:28.479081    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:28.479157    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:28.489672    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:28.489747    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:28.508527    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:28.508604    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:28.523927    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:28.524001    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:28.534466    5191 logs.go:282] 0 containers: []
	W1204 12:58:28.534487    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:28.534562    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:28.544958    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:28.544975    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:28.544981    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:28.568851    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:28.568858    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:28.603569    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:28.603577    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:28.617495    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:28.617508    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:28.633315    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:28.633326    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:28.644939    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:28.644949    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:28.660419    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:28.660428    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:28.677857    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:28.677868    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:28.689967    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:28.689979    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:28.703507    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:28.703523    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:28.708397    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:28.708403    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:28.743875    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:28.743892    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:28.759442    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:28.759457    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:27.654860    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:27.655125    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:27.678532    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:27.678659    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:27.694215    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:27.694295    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:27.706833    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:27.706900    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:27.717722    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:27.717790    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:27.728206    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:27.728280    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:27.740212    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:27.740275    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:27.750285    5382 logs.go:282] 0 containers: []
	W1204 12:58:27.750304    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:27.750360    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:27.761669    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:27.761689    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:27.761694    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:27.775551    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:27.775567    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:27.789212    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:27.789226    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:27.803443    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:27.803453    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:27.814801    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:27.814817    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:27.819267    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:27.819275    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:27.853628    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:27.853643    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:27.865357    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:27.865370    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:27.882700    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:27.882711    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:27.904271    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:27.904281    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:27.928390    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:27.928397    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:27.945497    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:27.945508    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:27.984379    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:27.984391    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:28.022570    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:28.022583    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:28.037296    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:28.037309    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:28.059725    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:28.059740    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:28.074820    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:28.074833    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:30.589622    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:31.273302    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:35.592324    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:35.592447    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:35.603415    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:35.603493    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:35.614289    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:35.614363    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:35.624356    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:35.624428    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:35.634494    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:35.634575    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:35.645254    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:35.645324    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:35.655922    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:35.656006    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:35.666605    5382 logs.go:282] 0 containers: []
	W1204 12:58:35.666616    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:35.666675    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:35.676755    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:35.676774    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:35.676779    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:35.698936    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:35.698947    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:35.711030    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:35.711040    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:35.715394    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:35.715400    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:35.729762    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:35.729772    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:35.740820    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:35.740831    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:35.752660    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:35.752671    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:35.789089    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:35.789098    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:35.826178    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:35.826188    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:35.841325    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:35.841336    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:35.856311    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:35.856322    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:35.890202    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:35.890215    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:35.904029    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:35.904039    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:35.916438    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:35.916450    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:35.934023    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:35.934032    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:35.948430    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:35.948440    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:35.959521    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:35.959531    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:36.275612    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:36.275802    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:36.292476    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:36.292571    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:36.311370    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:36.311451    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:36.322429    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:36.322515    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:36.333888    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:36.333963    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:36.345096    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:36.345179    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:36.356227    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:36.356306    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:36.366110    5191 logs.go:282] 0 containers: []
	W1204 12:58:36.366127    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:36.366197    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:36.376546    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:36.376560    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:36.376567    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:36.389331    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:36.389344    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:36.425080    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:36.425092    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:36.463903    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:36.463920    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:36.483757    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:36.483771    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:36.495508    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:36.495522    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:36.507202    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:36.507212    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:36.519439    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:36.519450    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:36.524039    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:36.524046    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:36.539084    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:36.539094    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:36.553168    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:36.553184    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:36.571432    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:36.571443    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:36.583673    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:36.583683    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:39.110595    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:38.485029    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:44.113024    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:44.113166    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:44.125856    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:44.125949    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:44.136548    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:44.136625    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:44.147246    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:44.147333    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:44.158012    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:44.158093    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:44.169007    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:44.169084    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:44.179289    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:44.179363    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:44.189999    5191 logs.go:282] 0 containers: []
	W1204 12:58:44.190012    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:44.190079    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:44.201044    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:44.201063    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:44.201069    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:44.226540    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:44.226551    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:44.262007    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:44.262016    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:44.267006    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:44.267014    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:44.302815    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:44.302825    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:44.314739    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:44.314750    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:44.334506    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:44.334518    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:44.348275    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:44.348286    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:44.360815    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:44.360825    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:44.380956    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:44.380965    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:44.395163    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:44.395174    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:44.406774    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:44.406784    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:44.421213    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:44.421224    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:43.487530    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:43.487884    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:43.515662    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:43.515820    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:43.535165    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:43.535261    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:43.549678    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:43.549764    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:43.561195    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:43.561280    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:43.571692    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:43.571762    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:43.582368    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:43.582438    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:43.600345    5382 logs.go:282] 0 containers: []
	W1204 12:58:43.600356    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:43.600425    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:43.611454    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:43.611472    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:43.611478    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:43.650247    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:43.650255    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:43.664089    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:43.664101    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:43.675517    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:43.675529    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:43.687375    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:43.687389    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:43.705298    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:43.705309    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:43.716495    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:43.716505    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:43.730715    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:43.730728    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:43.745424    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:43.745435    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:43.757331    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:43.757344    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:43.769406    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:43.769417    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:43.773512    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:43.773520    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:43.810208    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:43.810219    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:43.847662    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:43.847673    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:43.859587    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:43.859598    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:43.877013    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:43.877024    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:43.890980    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:43.890994    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:46.935190    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:46.418166    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:51.936866    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:51.937043    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:51.953367    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:51.953441    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:51.963650    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:51.963724    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:51.974786    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:51.974869    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:51.999423    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:51.999498    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:52.010138    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:52.010219    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:52.025286    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:52.025373    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:52.035823    5191 logs.go:282] 0 containers: []
	W1204 12:58:52.035838    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:52.035919    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:52.046633    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:52.046651    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:58:52.046657    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:58:52.061195    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:52.061205    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:52.085231    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:52.085240    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:52.097351    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:52.097361    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:52.121036    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:52.121045    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:52.136064    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:52.136074    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:52.147797    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:52.147810    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:52.182774    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:52.182786    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:52.187732    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:52.187739    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:52.222734    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:58:52.222747    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:58:52.237578    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:52.237589    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:52.255166    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:58:52.255176    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:58:52.267282    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:52.267293    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:54.781429    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:51.420565    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:51.420753    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:51.436577    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:51.436665    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:51.449278    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:51.449352    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:51.459856    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:51.459931    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:51.470970    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:51.471046    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:51.482253    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:51.482325    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:51.492552    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:51.492624    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:51.502467    5382 logs.go:282] 0 containers: []
	W1204 12:58:51.502480    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:51.502572    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:51.512983    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:51.513002    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:51.513008    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:51.527300    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:51.527311    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:51.568565    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:51.568577    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:51.582358    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:51.582367    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:51.605496    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:51.605504    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:51.619751    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:51.619762    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:51.631444    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:51.631454    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:51.646555    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:51.646567    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:51.661998    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:51.662010    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:51.680506    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:51.680521    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:51.696237    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:51.696249    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:51.730235    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:51.730246    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:51.741742    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:51.741753    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:51.759146    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:51.759158    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:51.798190    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:51.798204    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:51.803258    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:51.803268    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:51.821013    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:51.821027    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:54.336562    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:59.783779    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:59.783945    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:59.795343    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:58:59.795427    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:59.805836    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:58:59.805920    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:59.816575    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:58:59.816652    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:59.826892    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:58:59.826960    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:59.837836    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:58:59.837911    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:59.848331    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:58:59.848411    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:59.858565    5191 logs.go:282] 0 containers: []
	W1204 12:58:59.858577    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:59.858640    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:59.868781    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:58:59.868796    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:58:59.868804    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:59.880688    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:59.880703    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:59.915499    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:58:59.915508    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:58:59.927076    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:58:59.927087    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:58:59.939262    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:58:59.939274    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:58:59.953569    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:58:59.953578    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:58:59.965004    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:58:59.965015    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:58:59.982720    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:59.982729    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:59.987717    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:59.987725    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:59.339161    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:59.339295    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:59.351803    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:59.351888    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:59.362422    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:59.362501    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:59.400348    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:59.400429    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:59.413638    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:59.413718    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:59.424024    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:59.424103    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:59.438421    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:59.438493    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:59.448474    5382 logs.go:282] 0 containers: []
	W1204 12:58:59.448484    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:59.448542    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:59.459792    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:59.459814    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:59.459820    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:59.471444    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:59.471458    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:59.483368    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:59.483379    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:59.495442    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:59.495453    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:59.511887    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:59.511897    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:59.547055    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:59.547067    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:59.560992    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:59.561005    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:59.598091    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:59.598102    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:59.609776    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:59.609787    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:59.623579    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:59.623590    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:59.638170    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:59.638184    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:59.649672    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:59.649684    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:59.677213    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:59.677221    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:59.690587    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:59.690599    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:59.730246    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:59.730257    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:59.734615    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:59.734622    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:59.749778    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:59.749791    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:00.028512    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:00.028523    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:00.043420    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:00.043433    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:00.057768    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:00.057780    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:00.069486    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:00.069500    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:02.595448    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:02.269006    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:07.597805    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:07.597899    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:07.610456    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:07.610538    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:07.624142    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:07.624221    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:07.636309    5191 logs.go:282] 2 containers: [8b498b23d661 59434a9b24c5]
	I1204 12:59:07.636384    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:07.647518    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:07.647599    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:07.659365    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:07.659452    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:07.670910    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:07.670987    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:07.681752    5191 logs.go:282] 0 containers: []
	W1204 12:59:07.681763    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:07.681825    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:07.693251    5191 logs.go:282] 1 containers: [e9ace0c60701]
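After each failed probe, the tool re-enumerates the control-plane containers one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}. Kubelet-managed containers are named k8s_<container>_<pod>_..., so the name filter selects per component, and -a includes exited containers as well — which is consistent with process 5382 listing two IDs for apiserver, etcd, scheduler, controller-manager, and storage-provisioner (a stopped instance and its replacement both match). An illustrative sketch of that enumeration, where containerIDs is a hypothetical helper, not a minikube function:

    // listk8s.go — enumerate container IDs per component the way the
    // "docker ps -a --filter=name=k8s_<name>" lines above do (sketch only).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited) whose
    // name matches k8s_<component>, mirroring logs.go:282 ("N containers: [...]").
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }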
	I1204 12:59:07.693266    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:07.693272    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:07.705216    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:07.705228    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:07.720266    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:07.720280    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:07.732940    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:07.732949    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:07.750888    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:07.750898    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:07.774529    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:07.774545    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:07.807583    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:07.807591    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:07.811867    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:07.811873    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:07.847697    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:07.847712    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:07.859347    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:07.859359    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:07.870662    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:07.870677    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:07.885446    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:07.885460    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:07.899499    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:07.899515    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:07.271346    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:07.271530    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:07.284689    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:07.284773    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:07.295892    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:07.295971    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:07.307189    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:07.307264    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:07.318228    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:07.318314    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:07.333483    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:07.333560    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:07.344352    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:07.344421    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:07.356710    5382 logs.go:282] 0 containers: []
	W1204 12:59:07.356721    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:07.356795    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:07.368990    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:07.369007    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:07.369014    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:07.383972    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:07.383987    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:07.396015    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:07.396026    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:07.407065    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:07.407076    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:07.446180    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:07.446192    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:07.482728    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:07.482740    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:07.496396    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:07.496407    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:07.508415    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:07.508426    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:07.528993    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:07.529007    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:07.547774    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:07.547784    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:07.570322    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:07.570331    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:07.581965    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:07.581976    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:07.586595    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:07.586603    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:07.601162    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:07.601171    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:07.618180    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:07.618192    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:07.631358    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:07.631371    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:07.646970    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:07.646983    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:10.189130    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:10.413520    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:15.189540    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:15.189692    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:15.204486    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:15.204576    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:15.216217    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:15.216300    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:15.226926    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:15.226997    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:15.238170    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:15.238247    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:15.248959    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:15.249038    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:15.259225    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:15.259296    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:15.269422    5382 logs.go:282] 0 containers: []
	W1204 12:59:15.269435    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:15.269498    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:15.285470    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:15.285488    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:15.285494    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:15.324099    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:15.324107    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:15.328169    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:15.328176    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:15.342054    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:15.342067    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:15.381936    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:15.381951    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:15.395019    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:15.395030    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:15.411662    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:15.411674    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:15.449513    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:15.449532    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:15.469531    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:15.469547    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:15.482680    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:15.482693    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:15.499787    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:15.499799    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:15.511816    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:15.511829    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:15.535693    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:15.535708    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:15.549451    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:15.549460    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:15.564124    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:15.564134    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:15.577424    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:15.577438    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:15.596418    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:15.596438    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
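Each gathering pass runs a fixed set of shell commands over SSH (ssh_runner.go:195): the kubelet and docker/cri-docker journals via journalctl, a severity-filtered dmesg, docker logs --tail 400 for every discovered container, a container-status listing, and kubectl describe nodes. Note the fallback idiom in the container-status line: `which crictl || echo crictl` substitutes crictl's full path when the binary exists; otherwise the bare name fails to execute and || sudo docker ps -a takes over. A sketch of the pass follows — every command string is copied verbatim from the lines above, only the loop around them is invented:

    // gather.go — sketch of one gathering pass; commands are verbatim from the
    // report, the loop structure is illustrative.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            // `which crictl || echo crictl` resolves crictl's path if present;
            // otherwise the bare name fails and the docker fallback runs.
            "container status":    "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "etcd [da31b3465431]": "docker logs --tail 400 da31b3465431",
        }
        for name, cmd := range cmds {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("  %s failed: %v\n", name, err)
            }
            _ = out // the real tool buffers this output into the report
        }
    }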
	I1204 12:59:15.415878    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:15.415981    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:15.427346    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:15.427432    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:15.439066    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:15.439147    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:15.453965    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
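For PID 5191 the coredns listing has grown from two containers at 12:59:07 ([8b498b23d661 59434a9b24c5]) to four here. Since -a also returns exited containers, the two additional IDs are consistent with coredns containers being recreated by the kubelet while the apiserver stays unreachable — an inference from the listings, not something the log states directly.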
	I1204 12:59:15.454074    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:15.472394    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:15.472475    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:15.484211    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:15.484291    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:15.504593    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:15.504678    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:15.515483    5191 logs.go:282] 0 containers: []
	W1204 12:59:15.515496    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:15.515565    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:15.526830    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:15.526847    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:15.526852    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:15.547396    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:15.547409    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:15.562380    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:15.562392    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:15.601099    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:15.601112    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:15.619179    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:15.619193    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:15.630842    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:15.630853    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:15.642859    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:15.642870    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:15.676418    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:15.676430    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:15.688042    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:15.688058    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:15.699533    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:15.699545    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:15.711428    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:15.711439    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:15.723718    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:15.723729    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:15.738844    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:15.738855    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:15.750979    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:15.750991    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:15.774753    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:15.774760    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:18.281729    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:18.116099    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:23.284175    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:23.284275    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:23.295898    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:23.295984    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:23.308212    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:23.308294    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:23.320020    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:23.320102    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:23.331763    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:23.331843    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:23.345752    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:23.345826    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:23.357601    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:23.357680    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:23.369314    5191 logs.go:282] 0 containers: []
	W1204 12:59:23.369323    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:23.369387    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:23.381490    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:23.381507    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:23.381515    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:23.394585    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:23.394600    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:23.421450    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:23.421471    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:23.441373    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:23.441388    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:23.458578    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:23.458595    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:23.472081    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:23.472094    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:23.484863    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:23.484874    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:23.500474    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:23.500485    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:23.520884    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:23.520896    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:23.541025    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:23.541035    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:23.577911    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:23.577934    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:23.618187    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:23.618198    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:23.629389    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:23.629404    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:23.641327    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:23.641337    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:23.666019    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:23.666036    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
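Read off PID 5191's timestamps, the loop has a steady cadence: probe (12:59:10.413), "stopped" five seconds later (12:59:15.415), two to three seconds of gathering, then the next probe (12:59:18.281) — a period of roughly eight seconds that repeats through 12:59:26, :34, :41, and :49 below. A sketch of that retry shape; the interval and overall deadline are inferred from the log, not taken from minikube source, and checkHealthz/gatherLogs are stubs standing in for the sketches above:

    // retry.go — the probe/gather cadence as read off the timestamps; all
    // durations are inferred, not minikube's actual constants.
    package main

    import (
        "fmt"
        "time"
    )

    // Stubs standing in for the earlier sketches.
    func checkHealthz(url string) error { return fmt.Errorf("context deadline exceeded") }
    func gatherLogs()                   {}

    func main() {
        deadline := time.Now().Add(10 * time.Minute) // assumed overall wait budget
        for time.Now().Before(deadline) {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                gatherLogs()                        // ~2-3 s per pass in the log
                time.Sleep(2500 * time.Millisecond) // pause before the next probe
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println("gave up waiting for apiserver")
    }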
	I1204 12:59:23.118593    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:23.118822    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:23.134135    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:23.134233    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:23.145774    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:23.145854    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:23.157499    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:23.157580    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:23.169600    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:23.169684    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:23.181852    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:23.181938    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:23.192762    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:23.192837    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:23.204529    5382 logs.go:282] 0 containers: []
	W1204 12:59:23.204541    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:23.204606    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:23.215112    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:23.215130    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:23.215135    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:23.232394    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:23.232408    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:23.246398    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:23.246411    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:23.285703    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:23.285710    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:23.300788    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:23.300804    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:23.312928    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:23.312942    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:23.325047    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:23.325061    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:23.350277    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:23.350290    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:23.364137    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:23.364150    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:23.368520    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:23.368532    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:23.406098    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:23.406109    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:23.421778    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:23.421787    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:23.437063    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:23.437075    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:23.455944    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:23.455957    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:23.500929    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:23.500943    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:23.515724    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:23.515736    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:23.528135    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:23.528146    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:26.042429    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:26.171871    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:31.044858    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:31.045290    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:31.076977    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:31.077131    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:31.095078    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:31.095184    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:31.109491    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:31.109568    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:31.121418    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:31.121519    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:31.134106    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:31.134185    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:31.148047    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:31.148121    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:31.173086    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:31.173206    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:31.185155    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:31.185260    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:31.196801    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:31.196875    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:31.213673    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:31.213756    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:31.225047    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:31.225125    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:31.235993    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:31.236072    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:31.248919    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:31.249005    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:31.263959    5191 logs.go:282] 0 containers: []
	W1204 12:59:31.263973    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:31.264050    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:31.276062    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:31.276079    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:31.276085    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:31.289488    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:31.289498    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:31.301882    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:31.301892    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:31.317351    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:31.317362    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:31.331732    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:31.331742    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:31.345511    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:31.345524    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:31.384069    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:31.384081    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:31.398711    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:31.398721    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:31.410630    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:31.410641    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:31.423604    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:31.423619    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:31.442349    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:31.442359    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:31.479647    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:31.479661    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:31.494926    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:31.494937    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:31.520975    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:31.520988    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:31.533605    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:31.533620    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:34.039664    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:31.159372    5382 logs.go:282] 0 containers: []
	W1204 12:59:31.159390    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:31.159455    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:31.175852    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:31.175871    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:31.175877    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:31.182166    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:31.182180    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:31.199159    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:31.199168    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:31.215544    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:31.215553    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:31.239873    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:31.239886    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:31.258196    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:31.258211    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:31.273408    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:31.273421    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:31.286298    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:31.286312    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:31.329494    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:31.329507    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:31.367111    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:31.367127    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:31.407001    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:31.407022    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:31.424223    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:31.424234    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:31.436872    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:31.436884    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:31.460387    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:31.460396    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:31.477774    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:31.477786    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:31.495405    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:31.495414    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:31.507562    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:31.507576    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:34.022295    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:39.041933    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:39.042013    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:39.053557    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:39.053639    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:39.065124    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:39.065203    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:39.076655    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:39.076734    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:39.088100    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:39.088175    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:39.099057    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:39.099119    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:39.110521    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:39.110583    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:39.125414    5191 logs.go:282] 0 containers: []
	W1204 12:59:39.125425    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:39.125497    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:39.136692    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:39.136708    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:39.136713    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:39.174666    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:39.174679    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:39.201121    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:39.201132    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:39.220557    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:39.220568    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:39.232851    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:39.232867    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:39.267847    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:39.267859    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:39.280203    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:39.280212    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:39.293328    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:39.293339    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:39.307884    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:39.307896    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:39.320310    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:39.320321    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:39.335641    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:39.335653    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:39.349416    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:39.349428    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:39.361980    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:39.361991    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:39.367423    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:39.367436    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:39.382834    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:39.382848    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
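The recurring "describe nodes" step runs the kubectl binary that minikube caches inside the guest per Kubernetes version — here /var/lib/minikube/binaries/v1.24.1/kubectl — against the in-guest kubeconfig at /var/lib/minikube/kubeconfig, so the version in that path identifies the Kubernetes release these clusters are running.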
	I1204 12:59:39.024749    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:39.024903    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:39.037862    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:39.037941    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:39.049769    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:39.049852    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:39.061023    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:39.061108    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:39.073756    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:39.073845    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:39.085208    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:39.085287    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:39.097605    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:39.097697    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:39.108693    5382 logs.go:282] 0 containers: []
	W1204 12:59:39.108706    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:39.108815    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:39.121615    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:39.121632    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:39.121637    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:39.136408    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:39.136421    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:39.154156    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:39.154172    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:39.166949    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:39.166963    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:39.187752    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:39.187764    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:39.199984    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:39.199996    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:39.223464    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:39.223478    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:39.236108    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:39.236118    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:39.274160    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:39.274177    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:39.279057    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:39.279070    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:39.294967    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:39.294977    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:39.311988    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:39.312006    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:39.359762    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:39.359778    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:39.373600    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:39.373612    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:39.417335    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:39.417347    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:39.431835    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:39.431845    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:39.443795    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:39.443807    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:41.900661    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:41.959895    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:46.903066    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:46.903376    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:46.927498    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:46.927624    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:46.943374    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:46.943467    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:46.956510    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:46.956595    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:46.968762    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:46.968844    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:46.980752    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:46.980820    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:46.992490    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:46.992563    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:47.003414    5191 logs.go:282] 0 containers: []
	W1204 12:59:47.003424    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:47.003480    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:47.014816    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:47.014854    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:47.014863    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:47.019941    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:47.019951    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:47.034073    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:47.034084    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:47.053940    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:47.053949    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:47.070465    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:47.070478    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:47.089047    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:47.089059    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:47.102618    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:47.102629    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:47.129712    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:47.129729    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:47.142920    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:47.142931    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:47.180962    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:47.180976    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:47.198785    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:47.198797    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:47.216911    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:47.216926    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:47.255741    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:47.255752    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:47.271778    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:47.271791    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:47.287145    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:47.287163    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:49.802208    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:46.962227    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:46.962301    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:46.978531    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:46.978611    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:46.990414    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:46.990498    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:47.001806    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:47.001891    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:47.013695    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:47.013774    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:47.025532    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:47.025613    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:47.037999    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:47.038079    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:47.052598    5382 logs.go:282] 0 containers: []
	W1204 12:59:47.052641    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:47.052714    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:47.066158    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
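Each enumeration pass above issues one `docker ps -a` per control-plane component, filtered on the kubelet's k8s_<component> container-name prefix, then counts the IDs (the "N containers: [...]" lines). A small sketch of that pattern; the component list is copied from the log and the helper name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same filter used in the log to find all containers
// (running or exited) that the kubelet created for one component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}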
	I1204 12:59:47.066192    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:47.066203    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:47.106200    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:47.106215    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:47.121309    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:47.121324    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:47.135408    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:47.135425    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:47.158724    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:47.158738    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:47.163529    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:47.163539    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:47.202164    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:47.202176    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:47.222540    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:47.222553    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:47.237280    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:47.237292    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:47.277137    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:47.277154    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:47.292308    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:47.292326    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:47.304997    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:47.305007    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:47.316804    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:47.316818    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:47.328675    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:47.328686    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:47.340686    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:47.340697    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:47.355986    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:47.355999    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:47.367431    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:47.367441    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:49.880776    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:54.803125    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:54.803298    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:54.814298    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 12:59:54.814383    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:54.825341    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 12:59:54.825419    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:54.837761    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 12:59:54.837835    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:54.848659    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 12:59:54.848731    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:54.858995    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 12:59:54.859082    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:54.870113    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 12:59:54.870190    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:54.881171    5191 logs.go:282] 0 containers: []
	W1204 12:59:54.881184    5191 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:54.881252    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:54.892667    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 12:59:54.892687    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:54.892694    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:54.930258    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 12:59:54.930275    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 12:59:54.947703    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 12:59:54.947718    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 12:59:54.961305    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 12:59:54.961320    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 12:59:54.977255    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 12:59:54.977264    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 12:59:54.989864    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 12:59:54.989876    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 12:59:55.002394    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 12:59:55.002405    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 12:59:54.881564    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:54.881618    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:54.893279    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:54.893350    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:54.905033    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:54.905115    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:54.916749    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:54.916828    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:54.927281    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:54.927354    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:54.939183    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:54.939260    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:54.950837    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:54.950921    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:54.961841    5382 logs.go:282] 0 containers: []
	W1204 12:59:54.961860    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:54.961948    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:54.973434    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:54.973449    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:54.973455    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:54.978006    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:54.978014    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:55.015981    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:55.015990    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:55.056634    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:55.056650    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:55.069608    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:55.069621    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:55.085520    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:55.085528    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:55.103392    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:55.103405    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:55.118037    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:55.118048    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:55.132717    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:55.132729    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:55.148332    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:55.148345    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:55.164138    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:55.164151    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:55.176865    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:55.176876    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:55.213953    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:55.213967    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:55.225955    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:55.225968    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:55.243730    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:55.243744    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:55.255193    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:55.255204    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:55.267569    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:55.267580    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:55.015824    5191 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:55.015836    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:55.041318    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:55.041333    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:55.046662    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:55.046670    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:55.085218    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 12:59:55.085229    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 12:59:55.104393    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 12:59:55.104401    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 12:59:55.117116    5191 logs.go:123] Gathering logs for container status ...
	I1204 12:59:55.117133    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:55.129903    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 12:59:55.129915    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 12:59:55.151288    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 12:59:55.151300    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 12:59:57.666780    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:57.791884    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:02.669282    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:02.669531    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:02.693353    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:02.693490    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:02.712453    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:02.712573    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:02.727156    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:02.727249    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:02.737884    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:02.737968    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:02.751225    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:02.751312    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:02.763841    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:02.763938    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:02.775486    5191 logs.go:282] 0 containers: []
	W1204 13:00:02.775500    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:02.775577    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:02.785895    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:02.785914    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:02.785920    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:02.823649    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:02.823660    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:02.840097    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:02.840109    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:02.859303    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:02.859317    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:02.871658    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:02.871670    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:02.884521    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:02.884533    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:02.898882    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:02.898894    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:02.914514    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:02.914528    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:02.949712    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:02.949727    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:02.962057    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:02.962070    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:02.975307    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:02.975321    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:03.000791    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:03.000803    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:03.006657    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:03.006666    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:03.021972    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:03.021983    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:03.036837    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:03.036845    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:02.793108    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:02.793209    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:02.804653    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 13:00:02.804744    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:02.816533    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 13:00:02.816620    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:02.829116    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 13:00:02.829197    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:02.840557    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 13:00:02.840643    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:02.851432    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 13:00:02.851516    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:02.862690    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 13:00:02.862774    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:02.881409    5382 logs.go:282] 0 containers: []
	W1204 13:00:02.881422    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:02.881494    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:02.893801    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 13:00:02.893822    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 13:00:02.893829    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 13:00:02.906460    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 13:00:02.906477    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 13:00:02.922070    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 13:00:02.922083    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 13:00:02.937265    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:02.937276    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:02.941531    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:02.941539    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:02.980863    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 13:00:02.980877    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 13:00:02.993628    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:02.993640    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:03.034789    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 13:00:03.034808    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 13:00:03.073000    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 13:00:03.073012    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 13:00:03.091613    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 13:00:03.091623    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 13:00:03.103509    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 13:00:03.103521    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 13:00:03.120750    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 13:00:03.120761    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 13:00:03.132012    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 13:00:03.132024    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 13:00:03.152850    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 13:00:03.152862    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 13:00:03.168202    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:00:03.168213    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:03.179761    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 13:00:03.179773    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 13:00:03.197327    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:03.197339    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:05.722342    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:05.552088    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:10.723729    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:10.723835    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:10.742646    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 13:00:10.742728    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:10.754594    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 13:00:10.754679    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:10.767236    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 13:00:10.767379    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:10.779597    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 13:00:10.779676    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:10.790895    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 13:00:10.790975    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:10.802930    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 13:00:10.803048    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:10.814666    5382 logs.go:282] 0 containers: []
	W1204 13:00:10.814676    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:10.814743    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:10.826367    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 13:00:10.826385    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:10.826390    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:10.867237    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 13:00:10.867257    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 13:00:10.882668    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 13:00:10.882686    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 13:00:10.897158    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 13:00:10.897173    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 13:00:10.910173    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:10.910185    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:10.915101    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 13:00:10.915113    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 13:00:10.953887    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 13:00:10.953903    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 13:00:10.968009    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 13:00:10.968022    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 13:00:10.983125    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 13:00:10.983135    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 13:00:10.994918    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:00:10.994929    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:11.007164    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 13:00:11.007175    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 13:00:11.022405    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 13:00:11.022416    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 13:00:11.037247    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 13:00:11.037256    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 13:00:11.052864    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 13:00:11.052876    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 13:00:11.071191    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:11.071201    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:11.094314    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:11.094322    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:11.130406    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 13:00:11.130417    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 13:00:10.554363    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:10.554567    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:10.568054    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:10.568146    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:10.580019    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:10.580100    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:10.590898    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:10.590983    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:10.601672    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:10.601752    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:10.615836    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:10.615911    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:10.626277    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:10.626356    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:10.636771    5191 logs.go:282] 0 containers: []
	W1204 13:00:10.636781    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:10.636854    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:10.647134    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:10.647149    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:10.647155    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:10.659486    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:10.659500    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:10.676866    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:10.676877    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:10.688821    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:10.688835    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:10.706870    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:10.706885    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:10.726719    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:10.726728    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:10.741142    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:10.741156    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:10.756981    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:10.756996    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:10.769768    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:10.769780    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:10.795535    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:10.795552    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:10.808094    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:10.808110    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:10.851104    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:10.851116    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:10.889634    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:10.889648    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:10.903053    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:10.903066    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:10.919330    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:10.919345    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:13.426865    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:13.647783    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:18.648203    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:18.648239    5382 kubeadm.go:597] duration metric: took 4m4.118209875s to restartPrimaryControlPlane
	W1204 13:00:18.648268    5382 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 13:00:18.648283    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 13:00:19.701447    5382 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.053139667s)
	I1204 13:00:19.701528    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 13:00:19.706325    5382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 13:00:19.709243    5382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 13:00:19.711825    5382 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 13:00:19.711832    5382 kubeadm.go:157] found existing configuration files:
	
	I1204 13:00:19.711860    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf
	I1204 13:00:19.714630    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 13:00:19.714656    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 13:00:19.717722    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf
	I1204 13:00:19.720205    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 13:00:19.720235    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 13:00:19.722855    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf
	I1204 13:00:19.726065    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 13:00:19.726095    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 13:00:19.729199    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf
	I1204 13:00:19.731649    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 13:00:19.731674    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
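The grep/rm sequence above implements a simple rule: a kubeconfig that does not mention the expected control-plane endpoint is treated as stale and deleted before `kubeadm init` runs. A sketch of that rule, with paths and endpoint copied from the log; the function name is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep-then-rm steps above.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right endpoint; keep it
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
		os.Remove(f) // ignore the error, as `rm -f` would
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:63857", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}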
	I1204 13:00:19.734579    5382 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 13:00:19.751736    5382 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 13:00:19.751791    5382 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 13:00:19.805903    5382 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 13:00:19.805962    5382 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 13:00:19.806018    5382 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 13:00:19.855470    5382 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 13:00:18.429562    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:18.430062    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:18.462895    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:18.463045    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:18.490632    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:18.490727    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:18.504003    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:18.504093    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:18.515339    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:18.515414    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:18.525798    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:18.525886    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:18.536596    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:18.536701    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:18.547179    5191 logs.go:282] 0 containers: []
	W1204 13:00:18.547189    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:18.547249    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:18.558627    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:18.558642    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:18.558647    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:18.570892    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:18.570903    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:18.596545    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:18.596554    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:18.601282    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:18.601289    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:18.612825    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:18.612838    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:18.624538    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:18.624549    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:18.636154    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:18.636166    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:18.650660    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:18.650668    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:18.666769    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:18.666780    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:18.679492    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:18.679505    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:18.695907    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:18.695923    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:18.733265    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:18.733277    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:18.748387    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:18.748400    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:18.766729    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:18.766743    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:18.780716    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:18.780730    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:19.859449    5382 out.go:235]   - Generating certificates and keys ...
	I1204 13:00:19.859488    5382 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 13:00:19.859522    5382 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 13:00:19.859563    5382 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 13:00:19.859593    5382 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 13:00:19.859631    5382 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 13:00:19.859671    5382 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 13:00:19.859705    5382 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 13:00:19.859748    5382 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 13:00:19.859785    5382 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 13:00:19.859827    5382 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 13:00:19.859853    5382 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 13:00:19.859889    5382 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 13:00:19.912374    5382 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 13:00:20.104793    5382 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 13:00:20.156622    5382 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 13:00:20.244183    5382 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 13:00:20.273477    5382 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 13:00:20.273885    5382 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 13:00:20.273910    5382 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 13:00:20.361109    5382 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 13:00:20.368053    5382 out.go:235]   - Booting up control plane ...
	I1204 13:00:20.368102    5382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 13:00:20.368157    5382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 13:00:20.368205    5382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 13:00:20.368243    5382 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 13:00:20.368333    5382 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 13:00:21.318303    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:24.864619    5382 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501565 seconds
	I1204 13:00:24.864676    5382 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 13:00:24.869694    5382 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 13:00:25.379939    5382 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 13:00:25.380136    5382 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-827000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 13:00:25.885095    5382 kubeadm.go:310] [bootstrap-token] Using token: pfiqw0.szxm27i1cbji286z
	I1204 13:00:25.889175    5382 out.go:235]   - Configuring RBAC rules ...
	I1204 13:00:25.889237    5382 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 13:00:25.889279    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 13:00:25.893072    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 13:00:25.893988    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 13:00:25.894916    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 13:00:25.895699    5382 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 13:00:25.898744    5382 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 13:00:26.069436    5382 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 13:00:26.291324    5382 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 13:00:26.291888    5382 kubeadm.go:310] 
	I1204 13:00:26.291923    5382 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 13:00:26.291927    5382 kubeadm.go:310] 
	I1204 13:00:26.291966    5382 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 13:00:26.291971    5382 kubeadm.go:310] 
	I1204 13:00:26.291983    5382 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 13:00:26.292020    5382 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 13:00:26.292058    5382 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 13:00:26.292066    5382 kubeadm.go:310] 
	I1204 13:00:26.292101    5382 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 13:00:26.292118    5382 kubeadm.go:310] 
	I1204 13:00:26.292145    5382 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 13:00:26.292147    5382 kubeadm.go:310] 
	I1204 13:00:26.292177    5382 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 13:00:26.292217    5382 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 13:00:26.292252    5382 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 13:00:26.292255    5382 kubeadm.go:310] 
	I1204 13:00:26.292302    5382 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 13:00:26.292349    5382 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 13:00:26.292354    5382 kubeadm.go:310] 
	I1204 13:00:26.292401    5382 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pfiqw0.szxm27i1cbji286z \
	I1204 13:00:26.292462    5382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 \
	I1204 13:00:26.292475    5382 kubeadm.go:310] 	--control-plane 
	I1204 13:00:26.292479    5382 kubeadm.go:310] 
	I1204 13:00:26.292542    5382 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 13:00:26.292546    5382 kubeadm.go:310] 
	I1204 13:00:26.292583    5382 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pfiqw0.szxm27i1cbji286z \
	I1204 13:00:26.292640    5382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 
	I1204 13:00:26.293835    5382 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 13:00:26.293848    5382 cni.go:84] Creating CNI manager for ""
	I1204 13:00:26.293858    5382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:00:26.297798    5382 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 13:00:26.305953    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 13:00:26.309038    5382 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
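The scp above writes a 496-byte bridge conflist to /etc/cni/net.d; the payload itself is not reproduced in the log. For orientation, a bridge-plus-portmap conflist of the general shape such a file takes (field values are assumptions, not a byte-for-byte copy of minikube's file) can be written like this:

package main

import "os"

// A minimal bridge conflist of the general shape; the real 1-k8s.conflist
// payload is not shown in the log, so these fields are illustrative.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Write to the working directory instead of /etc/cni/net.d so the
	// sketch can run without root.
	os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644)
}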
	I1204 13:00:26.313855    5382 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 13:00:26.313902    5382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 13:00:26.313928    5382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-827000 minikube.k8s.io/updated_at=2024_12_04T13_00_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=stopped-upgrade-827000 minikube.k8s.io/primary=true
	I1204 13:00:26.362903    5382 kubeadm.go:1113] duration metric: took 49.038375ms to wait for elevateKubeSystemPrivileges
	I1204 13:00:26.362916    5382 ops.go:34] apiserver oom_adj: -16
	I1204 13:00:26.362923    5382 kubeadm.go:394] duration metric: took 4m11.846455375s to StartCluster
	I1204 13:00:26.362934    5382 settings.go:142] acquiring lock: {Name:mkc9bc1437987e3de306bb25e3c2f4effe0b8b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:00:26.363036    5382 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:00:26.363509    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:00:26.363741    5382 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:00:26.363802    5382 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 13:00:26.363857    5382 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-827000"
	I1204 13:00:26.363865    5382 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-827000"
	W1204 13:00:26.363868    5382 addons.go:243] addon storage-provisioner should already be in state true
	I1204 13:00:26.363880    5382 host.go:66] Checking if "stopped-upgrade-827000" exists ...
	I1204 13:00:26.363923    5382 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-827000"
	I1204 13:00:26.363948    5382 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:00:26.363971    5382 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-827000"
	I1204 13:00:26.365131    5382 kapi.go:59] client config for stopped-upgrade-827000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.key", CAFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10452b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 13:00:26.365254    5382 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-827000"
	W1204 13:00:26.365258    5382 addons.go:243] addon default-storageclass should already be in state true
	I1204 13:00:26.365266    5382 host.go:66] Checking if "stopped-upgrade-827000" exists ...
	I1204 13:00:26.367798    5382 out.go:177] * Verifying Kubernetes components...
	I1204 13:00:26.368170    5382 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 13:00:26.371029    5382 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 13:00:26.371035    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 13:00:26.373772    5382 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 13:00:26.320541    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:26.320657    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:26.332242    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:26.332323    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:26.344303    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:26.344385    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:26.356501    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:26.356582    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:26.373382    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:26.373447    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:26.384672    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:26.384747    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:26.395464    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:26.395538    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:26.405969    5191 logs.go:282] 0 containers: []
	W1204 13:00:26.406007    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:26.406075    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:26.417536    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:26.417550    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:26.417555    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:26.429920    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:26.429931    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:26.471044    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:26.471054    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:26.484818    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:26.484831    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:26.502418    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:26.502430    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:26.515763    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:26.515775    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:26.550858    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:26.550877    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:26.555973    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:26.555985    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:26.571562    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:26.571574    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:26.584359    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:26.584374    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:26.596918    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:26.596929    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:26.612246    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:26.612257    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:26.627681    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:26.627698    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:26.643203    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:26.643215    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:26.661239    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:26.661251    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:29.187720    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:26.377943    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 13:00:26.380764    5382 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 13:00:26.380773    5382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 13:00:26.380781    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 13:00:26.461665    5382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 13:00:26.467707    5382 api_server.go:52] waiting for apiserver process to appear ...
	I1204 13:00:26.467789    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 13:00:26.472638    5382 api_server.go:72] duration metric: took 108.882208ms to wait for apiserver process to appear ...
	I1204 13:00:26.472648    5382 api_server.go:88] waiting for apiserver healthz status ...
	I1204 13:00:26.472658    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:26.479272    5382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 13:00:26.531905    5382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 13:00:26.868217    5382 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 13:00:26.868229    5382 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 13:00:34.190053    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:34.190248    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:34.202453    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:34.202592    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:34.213068    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:34.213153    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:34.223544    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:34.223626    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:34.233724    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:34.233799    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:34.244496    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:34.244570    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:34.255356    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:34.255435    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:34.265676    5191 logs.go:282] 0 containers: []
	W1204 13:00:34.265688    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:34.265759    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:34.275861    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:34.275880    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:34.275887    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:34.310903    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:34.310914    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:34.324624    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:34.324635    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:34.340218    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:34.340229    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:34.364235    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:34.364245    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:34.398180    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:34.398189    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:34.412981    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:34.412996    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:34.432639    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:34.432649    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:34.444671    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:34.444682    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:34.456984    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:34.456996    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:34.474869    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:34.474880    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:34.486341    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:34.486350    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:34.491333    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:34.491341    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:34.503195    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:34.503208    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:34.515022    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:34.515036    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:31.474840    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:31.474909    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:37.029274    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:36.475453    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:36.475474    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:42.031549    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:42.031711    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:42.045711    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:42.045798    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:42.056836    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:42.056907    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:42.067882    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:42.067965    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:42.078499    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:42.078575    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:42.089440    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:42.089514    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:42.100111    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:42.100190    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:42.110206    5191 logs.go:282] 0 containers: []
	W1204 13:00:42.110217    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:42.110285    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:42.120827    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:42.120845    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:42.120851    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:42.156584    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:42.156600    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:42.168763    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:42.168774    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:42.181912    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:42.181923    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:42.196672    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:42.196683    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:42.208549    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:42.208560    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:42.224462    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:42.224476    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:42.242618    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:42.242630    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:42.254501    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:42.254512    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:42.280124    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:42.280143    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:42.285136    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:42.285144    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:42.320730    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:42.320741    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:42.334909    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:42.334921    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:42.349275    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:42.349287    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:42.364438    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:42.364452    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:44.878389    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:41.475928    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:41.475954    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:49.880797    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:49.881022    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:49.897389    5191 logs.go:282] 1 containers: [0fde659cfba5]
	I1204 13:00:49.897478    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:49.909648    5191 logs.go:282] 1 containers: [110541f8fb04]
	I1204 13:00:49.909726    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:49.921026    5191 logs.go:282] 4 containers: [2047ebe266ff c1dcabc606e3 8b498b23d661 59434a9b24c5]
	I1204 13:00:49.921102    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:49.933088    5191 logs.go:282] 1 containers: [552fb3b88163]
	I1204 13:00:49.933171    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:49.949964    5191 logs.go:282] 1 containers: [ab92f2224807]
	I1204 13:00:49.950044    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:49.960507    5191 logs.go:282] 1 containers: [3b044967c881]
	I1204 13:00:49.960582    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:49.971361    5191 logs.go:282] 0 containers: []
	W1204 13:00:49.971374    5191 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:49.971437    5191 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:49.982155    5191 logs.go:282] 1 containers: [e9ace0c60701]
	I1204 13:00:49.982172    5191 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:49.982179    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:46.476534    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:46.476561    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:50.018931    5191 logs.go:123] Gathering logs for coredns [2047ebe266ff] ...
	I1204 13:00:50.018947    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2047ebe266ff"
	I1204 13:00:50.031949    5191 logs.go:123] Gathering logs for coredns [59434a9b24c5] ...
	I1204 13:00:50.031959    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59434a9b24c5"
	I1204 13:00:50.044922    5191 logs.go:123] Gathering logs for storage-provisioner [e9ace0c60701] ...
	I1204 13:00:50.044933    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9ace0c60701"
	I1204 13:00:50.057336    5191 logs.go:123] Gathering logs for container status ...
	I1204 13:00:50.057347    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:50.069256    5191 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:50.069265    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:50.074423    5191 logs.go:123] Gathering logs for etcd [110541f8fb04] ...
	I1204 13:00:50.074429    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 110541f8fb04"
	I1204 13:00:50.088512    5191 logs.go:123] Gathering logs for coredns [c1dcabc606e3] ...
	I1204 13:00:50.088521    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1dcabc606e3"
	I1204 13:00:50.101185    5191 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:50.101194    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:50.124065    5191 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:50.124072    5191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:50.158704    5191 logs.go:123] Gathering logs for kube-apiserver [0fde659cfba5] ...
	I1204 13:00:50.158721    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fde659cfba5"
	I1204 13:00:50.180275    5191 logs.go:123] Gathering logs for coredns [8b498b23d661] ...
	I1204 13:00:50.180285    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b498b23d661"
	I1204 13:00:50.191933    5191 logs.go:123] Gathering logs for kube-scheduler [552fb3b88163] ...
	I1204 13:00:50.191945    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 552fb3b88163"
	I1204 13:00:50.207219    5191 logs.go:123] Gathering logs for kube-proxy [ab92f2224807] ...
	I1204 13:00:50.207233    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab92f2224807"
	I1204 13:00:50.218614    5191 logs.go:123] Gathering logs for kube-controller-manager [3b044967c881] ...
	I1204 13:00:50.218624    5191 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b044967c881"
	I1204 13:00:52.739087    5191 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:51.477319    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:51.477351    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:56.478262    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:56.478287    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 13:00:56.870050    5382 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 13:00:56.874047    5382 out.go:177] * Enabled addons: storage-provisioner
	I1204 13:00:57.741558    5191 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:57.746927    5191 out.go:201] 
	W1204 13:00:57.749953    5191 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1204 13:00:57.749989    5191 out.go:270] * 
	W1204 13:00:57.752185    5191 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:00:57.760798    5191 out.go:201] 
	I1204 13:00:56.882055    5382 addons.go:510] duration metric: took 30.517914958s for enable addons: enabled=[storage-provisioner]
	I1204 13:01:01.479579    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:01.479619    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:06.481118    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:06.481149    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
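
Both processes in this run, 5191 and 5382, spend their entire wait budget in the healthz loop above: every probe of https://10.0.2.15:8443/healthz times out, so 5191 eventually exits with GUEST_START while 5382's log simply ends here. The command below is a hypothetical manual probe of the same endpoint, not part of the captured run; -k skips verification of the cluster's self-signed certificate.

	# Hypothetical diagnostic: probe the apiserver healthz endpoint directly.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
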
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-12-04 20:52:06 UTC, ends at Wed 2024-12-04 21:01:13 UTC. --
	Dec 04 21:00:58 running-upgrade-728000 dockerd[3144]: time="2024-12-04T21:00:58.823566112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 04 21:00:58 running-upgrade-728000 dockerd[3144]: time="2024-12-04T21:00:58.823620320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 04 21:00:58 running-upgrade-728000 dockerd[3144]: time="2024-12-04T21:00:58.823626362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 04 21:00:58 running-upgrade-728000 dockerd[3144]: time="2024-12-04T21:00:58.823797567Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fee7ec259a41d0ee5564986780e584849b9ed0f4d3969cff9a4694cee83639ac pid=18800 runtime=io.containerd.runc.v2
	Dec 04 21:00:59 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:00:59Z" level=error msg="ContainerStats resp: {0x4000671e00 linux}"
	Dec 04 21:00:59 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:00:59Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 04 21:01:00 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:00Z" level=error msg="ContainerStats resp: {0x4000951340 linux}"
	Dec 04 21:01:00 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:00Z" level=error msg="ContainerStats resp: {0x40009515c0 linux}"
	Dec 04 21:01:00 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:00Z" level=error msg="ContainerStats resp: {0x400009d280 linux}"
	Dec 04 21:01:00 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:00Z" level=error msg="ContainerStats resp: {0x400009d900 linux}"
	Dec 04 21:01:00 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:00Z" level=error msg="ContainerStats resp: {0x40003a1a80 linux}"
	Dec 04 21:01:00 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:00Z" level=error msg="ContainerStats resp: {0x40004fe680 linux}"
	Dec 04 21:01:00 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:00Z" level=error msg="ContainerStats resp: {0x4000970900 linux}"
	Dec 04 21:01:04 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 04 21:01:09 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 04 21:01:10 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:10Z" level=error msg="ContainerStats resp: {0x400062d300 linux}"
	Dec 04 21:01:10 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:10Z" level=error msg="ContainerStats resp: {0x400089d3c0 linux}"
	Dec 04 21:01:11 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:11Z" level=error msg="ContainerStats resp: {0x40006715c0 linux}"
	Dec 04 21:01:12 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:12Z" level=error msg="ContainerStats resp: {0x40003a1780 linux}"
	Dec 04 21:01:12 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:12Z" level=error msg="ContainerStats resp: {0x40003a1a80 linux}"
	Dec 04 21:01:12 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:12Z" level=error msg="ContainerStats resp: {0x40009703c0 linux}"
	Dec 04 21:01:12 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:12Z" level=error msg="ContainerStats resp: {0x4000950c00 linux}"
	Dec 04 21:01:12 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:12Z" level=error msg="ContainerStats resp: {0x4000951080 linux}"
	Dec 04 21:01:12 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:12Z" level=error msg="ContainerStats resp: {0x4000951480 linux}"
	Dec 04 21:01:12 running-upgrade-728000 cri-dockerd[2985]: time="2024-12-04T21:01:12Z" level=error msg="ContainerStats resp: {0x40009518c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fee7ec259a41d       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   51700684a17d9
	2e4374bf74856       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   ccfd2faddfed0
	2047ebe266ffb       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ccfd2faddfed0
	c1dcabc606e30       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   51700684a17d9
	ab92f22248076       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   94ae41c48edeb
	e9ace0c60701c       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   cfbda05ff03e8
	552fb3b88163e       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   0b7c02748a304
	110541f8fb044       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   7bd447a75ab7f
	0fde659cfba59       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   6cdfb2857802c
	3b044967c8816       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   a24126794d047
	
	
	==> coredns [2047ebe266ff] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:42016->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:40371->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:41412->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:60208->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:50877->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:49629->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:41138->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:46773->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:34052->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6423296115527280324.1253112199914202882. HINFO: read udp 10.244.0.2:57541->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2e4374bf7485] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5429814710285082498.5274181280921081647. HINFO: read udp 10.244.0.2:59788->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5429814710285082498.5274181280921081647. HINFO: read udp 10.244.0.2:39956->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5429814710285082498.5274181280921081647. HINFO: read udp 10.244.0.2:47011->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c1dcabc606e3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:39428->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:41332->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:54442->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:41713->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:53604->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:40448->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:52314->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:59661->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:50648->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8442720486019810598.8470997348590270126. HINFO: read udp 10.244.0.3:41390->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fee7ec259a41] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5716376761701543221.4884017904430996485. HINFO: read udp 10.244.0.3:37177->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5716376761701543221.4884017904430996485. HINFO: read udp 10.244.0.3:54979->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5716376761701543221.4884017904430996485. HINFO: read udp 10.244.0.3:40703->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-728000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-728000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59
	                    minikube.k8s.io/name=running-upgrade-728000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T12_56_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 20:56:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-728000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 21:01:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 20:56:56 +0000   Wed, 04 Dec 2024 20:56:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 20:56:56 +0000   Wed, 04 Dec 2024 20:56:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 20:56:56 +0000   Wed, 04 Dec 2024 20:56:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 20:56:56 +0000   Wed, 04 Dec 2024 20:56:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-728000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 add2058b11c6450c8e66bf9e123a394c
	  System UUID:                add2058b11c6450c8e66bf9e123a394c
	  Boot ID:                    fa374059-9e99-4289-b422-c4eaaae68ab3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-v5tb6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-zwktn                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-728000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-728000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-728000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-h9z9m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-728000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-728000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-728000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-728000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-728000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-728000 event: Registered Node running-upgrade-728000 in Controller
	
	
	==> dmesg <==
	[  +1.844131] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.079909] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.085502] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.137886] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.090397] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.076754] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.547156] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[ +10.160188] systemd-fstab-generator[1913]: Ignoring "noauto" for root device
	[  +2.494150] systemd-fstab-generator[2196]: Ignoring "noauto" for root device
	[  +0.151245] systemd-fstab-generator[2229]: Ignoring "noauto" for root device
	[  +0.091852] systemd-fstab-generator[2240]: Ignoring "noauto" for root device
	[  +0.091791] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +2.631218] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.179881] systemd-fstab-generator[2942]: Ignoring "noauto" for root device
	[  +0.069112] systemd-fstab-generator[2953]: Ignoring "noauto" for root device
	[  +0.083934] systemd-fstab-generator[2964]: Ignoring "noauto" for root device
	[  +0.079402] systemd-fstab-generator[2978]: Ignoring "noauto" for root device
	[  +2.278725] systemd-fstab-generator[3131]: Ignoring "noauto" for root device
	[  +3.025767] systemd-fstab-generator[3524]: Ignoring "noauto" for root device
	[  +2.059133] systemd-fstab-generator[3987]: Ignoring "noauto" for root device
	[Dec 4 20:53] kauditd_printk_skb: 68 callbacks suppressed
	[Dec 4 20:56] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.146366] systemd-fstab-generator[11811]: Ignoring "noauto" for root device
	[  +5.652074] systemd-fstab-generator[12429]: Ignoring "noauto" for root device
	[  +0.459649] systemd-fstab-generator[12564]: Ignoring "noauto" for root device
	
	
	==> etcd [110541f8fb04] <==
	{"level":"info","ts":"2024-12-04T20:56:52.483Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T20:56:52.483Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T20:56:52.483Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T20:56:52.483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-12-04T20:56:52.483Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-04T20:56:52.483Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-04T20:56:52.483Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-04T20:56:53.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-04T20:56:53.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-04T20:56:53.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-04T20:56:53.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-04T20:56:53.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-04T20:56:53.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-04T20:56:53.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-04T20:56:53.221Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-728000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T20:56:53.221Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:56:53.221Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T20:56:53.222Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-04T20:56:53.222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T20:56:53.222Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T20:56:53.222Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T20:56:53.223Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T20:56:53.230Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T20:56:53.230Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T20:56:53.230Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 21:01:14 up 9 min,  0 users,  load average: 0.24, 0.25, 0.13
	Linux running-upgrade-728000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [0fde659cfba5] <==
	I1204 20:56:54.434355       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1204 20:56:54.459205       1 cache.go:39] Caches are synced for autoregister controller
	I1204 20:56:54.459895       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1204 20:56:54.461729       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1204 20:56:54.462838       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1204 20:56:54.463110       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1204 20:56:54.463958       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1204 20:56:55.184704       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1204 20:56:55.363844       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1204 20:56:55.366414       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1204 20:56:55.366436       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1204 20:56:55.503550       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 20:56:55.513052       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 20:56:55.535339       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1204 20:56:55.537826       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1204 20:56:55.538250       1 controller.go:611] quota admission added evaluator for: endpoints
	I1204 20:56:55.539640       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 20:56:56.521341       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1204 20:56:56.786705       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1204 20:56:56.790815       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1204 20:56:56.795713       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1204 20:56:56.841620       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 20:57:10.399447       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1204 20:57:10.446623       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1204 20:57:10.972925       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [3b044967c881] <==
	I1204 20:57:09.690703       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1204 20:57:09.690765       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1204 20:57:09.690949       1 event.go:294] "Event occurred" object="running-upgrade-728000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-728000 event: Registered Node running-upgrade-728000 in Controller"
	I1204 20:57:09.692804       1 shared_informer.go:262] Caches are synced for ephemeral
	I1204 20:57:09.693424       1 shared_informer.go:262] Caches are synced for disruption
	I1204 20:57:09.693458       1 disruption.go:371] Sending events to api server.
	I1204 20:57:09.693480       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1204 20:57:09.693518       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1204 20:57:09.693699       1 shared_informer.go:262] Caches are synced for endpoint
	I1204 20:57:09.693898       1 shared_informer.go:262] Caches are synced for deployment
	I1204 20:57:09.693934       1 shared_informer.go:262] Caches are synced for attach detach
	I1204 20:57:09.694433       1 shared_informer.go:262] Caches are synced for daemon sets
	I1204 20:57:09.695190       1 shared_informer.go:262] Caches are synced for TTL
	I1204 20:57:09.793200       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1204 20:57:09.838025       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1204 20:57:09.844088       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1204 20:57:09.897430       1 shared_informer.go:262] Caches are synced for resource quota
	I1204 20:57:09.903582       1 shared_informer.go:262] Caches are synced for resource quota
	I1204 20:57:10.310006       1 shared_informer.go:262] Caches are synced for garbage collector
	I1204 20:57:10.395727       1 shared_informer.go:262] Caches are synced for garbage collector
	I1204 20:57:10.395739       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1204 20:57:10.400711       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1204 20:57:10.449801       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h9z9m"
	I1204 20:57:10.700587       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zwktn"
	I1204 20:57:10.705805       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-v5tb6"
	
	
	==> kube-proxy [ab92f2224807] <==
	I1204 20:57:10.960560       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1204 20:57:10.960588       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1204 20:57:10.960598       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1204 20:57:10.970865       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1204 20:57:10.970889       1 server_others.go:206] "Using iptables Proxier"
	I1204 20:57:10.970914       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1204 20:57:10.971032       1 server.go:661] "Version info" version="v1.24.1"
	I1204 20:57:10.971038       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 20:57:10.971342       1 config.go:317] "Starting service config controller"
	I1204 20:57:10.971352       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1204 20:57:10.971364       1 config.go:226] "Starting endpoint slice config controller"
	I1204 20:57:10.971367       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1204 20:57:10.971669       1 config.go:444] "Starting node config controller"
	I1204 20:57:10.971672       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1204 20:57:11.072140       1 shared_informer.go:262] Caches are synced for node config
	I1204 20:57:11.072162       1 shared_informer.go:262] Caches are synced for service config
	I1204 20:57:11.072176       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [552fb3b88163] <==
	W1204 20:56:54.440115       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 20:56:54.440455       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1204 20:56:54.440200       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1204 20:56:54.440802       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1204 20:56:54.440211       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 20:56:54.440912       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1204 20:56:54.440222       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 20:56:54.440969       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1204 20:56:54.440233       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 20:56:54.441006       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1204 20:56:54.440246       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 20:56:54.441062       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1204 20:56:54.440273       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 20:56:54.441097       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1204 20:56:54.440293       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 20:56:54.441146       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1204 20:56:54.440905       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 20:56:54.441182       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1204 20:56:55.311159       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 20:56:55.311188       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1204 20:56:55.326492       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 20:56:55.326611       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1204 20:56:55.429731       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 20:56:55.429818       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1204 20:56:56.036251       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-12-04 20:52:06 UTC, ends at Wed 2024-12-04 21:01:14 UTC. --
	Dec 04 20:56:58 running-upgrade-728000 kubelet[12435]: E1204 20:56:58.620323   12435 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-728000\" already exists" pod="kube-system/etcd-running-upgrade-728000"
	Dec 04 20:56:58 running-upgrade-728000 kubelet[12435]: E1204 20:56:58.817253   12435 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-728000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-728000"
	Dec 04 20:56:59 running-upgrade-728000 kubelet[12435]: I1204 20:56:59.016638   12435 request.go:601] Waited for 1.117906626s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Dec 04 20:56:59 running-upgrade-728000 kubelet[12435]: E1204 20:56:59.019590   12435 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-728000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-728000"
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: I1204 20:57:09.673515   12435 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: I1204 20:57:09.673836   12435 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: I1204 20:57:09.698444   12435 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: I1204 20:57:09.775437   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb-tmp\") pod \"storage-provisioner\" (UID: \"72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb\") " pod="kube-system/storage-provisioner"
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: I1204 20:57:09.775470   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578m6\" (UniqueName: \"kubernetes.io/projected/72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb-kube-api-access-578m6\") pod \"storage-provisioner\" (UID: \"72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb\") " pod="kube-system/storage-provisioner"
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: E1204 20:57:09.879552   12435 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: E1204 20:57:09.879570   12435 projected.go:192] Error preparing data for projected volume kube-api-access-578m6 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 04 20:57:09 running-upgrade-728000 kubelet[12435]: E1204 20:57:09.879616   12435 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb-kube-api-access-578m6 podName:72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb nodeName:}" failed. No retries permitted until 2024-12-04 20:57:10.379603443 +0000 UTC m=+13.603890961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-578m6" (UniqueName: "kubernetes.io/projected/72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb-kube-api-access-578m6") pod "storage-provisioner" (UID: "72ac60c4-aeb8-47af-a2c7-9a7f74ed2edb") : configmap "kube-root-ca.crt" not found
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.451816   12435 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.492451   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcn7b\" (UniqueName: \"kubernetes.io/projected/d5298fc5-42ad-4fcd-86c5-851456102905-kube-api-access-vcn7b\") pod \"kube-proxy-h9z9m\" (UID: \"d5298fc5-42ad-4fcd-86c5-851456102905\") " pod="kube-system/kube-proxy-h9z9m"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.492563   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5298fc5-42ad-4fcd-86c5-851456102905-lib-modules\") pod \"kube-proxy-h9z9m\" (UID: \"d5298fc5-42ad-4fcd-86c5-851456102905\") " pod="kube-system/kube-proxy-h9z9m"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.492616   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5298fc5-42ad-4fcd-86c5-851456102905-kube-proxy\") pod \"kube-proxy-h9z9m\" (UID: \"d5298fc5-42ad-4fcd-86c5-851456102905\") " pod="kube-system/kube-proxy-h9z9m"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.492658   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5298fc5-42ad-4fcd-86c5-851456102905-xtables-lock\") pod \"kube-proxy-h9z9m\" (UID: \"d5298fc5-42ad-4fcd-86c5-851456102905\") " pod="kube-system/kube-proxy-h9z9m"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.704414   12435 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.712341   12435 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.794605   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/203cbf26-7f75-407d-9471-b393fa5dc0d3-config-volume\") pod \"coredns-6d4b75cb6d-zwktn\" (UID: \"203cbf26-7f75-407d-9471-b393fa5dc0d3\") " pod="kube-system/coredns-6d4b75cb6d-zwktn"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.794641   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfm49\" (UniqueName: \"kubernetes.io/projected/203cbf26-7f75-407d-9471-b393fa5dc0d3-kube-api-access-lfm49\") pod \"coredns-6d4b75cb6d-zwktn\" (UID: \"203cbf26-7f75-407d-9471-b393fa5dc0d3\") " pod="kube-system/coredns-6d4b75cb6d-zwktn"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.794673   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnkkt\" (UniqueName: \"kubernetes.io/projected/40db441a-e35c-4b8f-97d5-5d4fad76a4ac-kube-api-access-xnkkt\") pod \"coredns-6d4b75cb6d-v5tb6\" (UID: \"40db441a-e35c-4b8f-97d5-5d4fad76a4ac\") " pod="kube-system/coredns-6d4b75cb6d-v5tb6"
	Dec 04 20:57:10 running-upgrade-728000 kubelet[12435]: I1204 20:57:10.794683   12435 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40db441a-e35c-4b8f-97d5-5d4fad76a4ac-config-volume\") pod \"coredns-6d4b75cb6d-v5tb6\" (UID: \"40db441a-e35c-4b8f-97d5-5d4fad76a4ac\") " pod="kube-system/coredns-6d4b75cb6d-v5tb6"
	Dec 04 21:00:59 running-upgrade-728000 kubelet[12435]: I1204 21:00:59.035793   12435 scope.go:110] "RemoveContainer" containerID="8b498b23d661b9bf40480a60b2ab10a48fd28f4fed0b68aac535596136322b8d"
	Dec 04 21:00:59 running-upgrade-728000 kubelet[12435]: I1204 21:00:59.051085   12435 scope.go:110] "RemoveContainer" containerID="59434a9b24c508ae5d88ed2b407017a9f144571a580f1b1ac21efb904911f1c4"
	
	
	==> storage-provisioner [e9ace0c60701] <==
	I1204 20:57:10.804387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 20:57:10.808745       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 20:57:10.809276       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 20:57:10.814267       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 20:57:10.814414       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-728000_9608378d-c707-4245-a378-956f39c8c176!
	I1204 20:57:10.815321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88b79abd-85a9-4702-a8f6-5559cc5069de", APIVersion:"v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-728000_9608378d-c707-4245-a378-956f39c8c176 became leader
	I1204 20:57:10.915096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-728000_9608378d-c707-4245-a378-956f39c8c176!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-728000 -n running-upgrade-728000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-728000 -n running-upgrade-728000: exit status 2 (15.629801125s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-728000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-728000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-728000
--- FAIL: TestRunningBinaryUpgrade (596.95s)
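The final status probe above reduces to a reachability check: is anything still serving on the control-plane endpoint after the binary upgrade? As a minimal sketch (hypothetical, not part of the test suite), the same check in Go; the address 10.0.2.15:8443 is assumed from the apiserver log earlier in this dump and would need substituting for other profiles.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumed from the log above: the guest reports node IP 10.0.2.15
	// and the apiserver serves on port 8443.
	addr := "10.0.2.15:8443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// This branch corresponds to the "Stopped" status recorded above.
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("apiserver port open at %s\n", addr)
}

A refused or timed-out dial here is what `status --format={{.APIServer}}` ultimately reported as "Stopped".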

TestKubernetesUpgrade (18.36s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-617000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-617000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.972973542s)

-- stdout --
	* [kubernetes-upgrade-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-617000" primary control-plane node in "kubernetes-upgrade-617000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:54:33.296324    5298 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:54:33.296492    5298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:54:33.296495    5298 out.go:358] Setting ErrFile to fd 2...
	I1204 12:54:33.296498    5298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:54:33.296659    5298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:54:33.297843    5298 out.go:352] Setting JSON to false
	I1204 12:54:33.315851    5298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5044,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:54:33.315935    5298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:54:33.322486    5298 out.go:177] * [kubernetes-upgrade-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:54:33.329719    5298 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:54:33.329768    5298 notify.go:220] Checking for updates...
	I1204 12:54:33.336639    5298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:54:33.339684    5298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:54:33.342622    5298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:54:33.345655    5298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:54:33.348686    5298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:54:33.350431    5298 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:54:33.350500    5298 config.go:182] Loaded profile config "running-upgrade-728000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:54:33.350542    5298 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:54:33.354601    5298 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 12:54:33.361470    5298 start.go:297] selected driver: qemu2
	I1204 12:54:33.361477    5298 start.go:901] validating driver "qemu2" against <nil>
	I1204 12:54:33.361505    5298 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:54:33.363865    5298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 12:54:33.366627    5298 out.go:177] * Automatically selected the socket_vmnet network
	I1204 12:54:33.369749    5298 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 12:54:33.369764    5298 cni.go:84] Creating CNI manager for ""
	I1204 12:54:33.369787    5298 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 12:54:33.369818    5298 start.go:340] cluster config:
	{Name:kubernetes-upgrade-617000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:54:33.373973    5298 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:54:33.381679    5298 out.go:177] * Starting "kubernetes-upgrade-617000" primary control-plane node in "kubernetes-upgrade-617000" cluster
	I1204 12:54:33.385707    5298 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 12:54:33.385722    5298 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 12:54:33.385730    5298 cache.go:56] Caching tarball of preloaded images
	I1204 12:54:33.385803    5298 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:54:33.385808    5298 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 12:54:33.385872    5298 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/kubernetes-upgrade-617000/config.json ...
	I1204 12:54:33.385883    5298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/kubernetes-upgrade-617000/config.json: {Name:mkc573f9f34ad106533cd99ebbdd0bfc0548d4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:54:33.386273    5298 start.go:360] acquireMachinesLock for kubernetes-upgrade-617000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:54:33.386315    5298 start.go:364] duration metric: took 36.083µs to acquireMachinesLock for "kubernetes-upgrade-617000"
	I1204 12:54:33.386327    5298 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-617000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:54:33.386351    5298 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:54:33.394663    5298 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:54:33.419200    5298 start.go:159] libmachine.API.Create for "kubernetes-upgrade-617000" (driver="qemu2")
	I1204 12:54:33.419228    5298 client.go:168] LocalClient.Create starting
	I1204 12:54:33.419309    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:54:33.419351    5298 main.go:141] libmachine: Decoding PEM data...
	I1204 12:54:33.419362    5298 main.go:141] libmachine: Parsing certificate...
	I1204 12:54:33.419404    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:54:33.419434    5298 main.go:141] libmachine: Decoding PEM data...
	I1204 12:54:33.419443    5298 main.go:141] libmachine: Parsing certificate...
	I1204 12:54:33.419857    5298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:54:33.585950    5298 main.go:141] libmachine: Creating SSH key...
	I1204 12:54:33.787168    5298 main.go:141] libmachine: Creating Disk image...
	I1204 12:54:33.787178    5298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:54:33.787430    5298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:33.800049    5298 main.go:141] libmachine: STDOUT: 
	I1204 12:54:33.800072    5298 main.go:141] libmachine: STDERR: 
	I1204 12:54:33.800134    5298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2 +20000M
	I1204 12:54:33.808932    5298 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:54:33.808947    5298 main.go:141] libmachine: STDERR: 
	I1204 12:54:33.808968    5298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:33.808976    5298 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:54:33.808988    5298 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:54:33.809021    5298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:7a:f1:ed:2a:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:33.810919    5298 main.go:141] libmachine: STDOUT: 
	I1204 12:54:33.810937    5298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:54:33.810959    5298 client.go:171] duration metric: took 391.718791ms to LocalClient.Create
	I1204 12:54:35.813090    5298 start.go:128] duration metric: took 2.426690875s to createHost
	I1204 12:54:35.813135    5298 start.go:83] releasing machines lock for "kubernetes-upgrade-617000", held for 2.4267855s
	W1204 12:54:35.813173    5298 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:54:35.822035    5298 out.go:177] * Deleting "kubernetes-upgrade-617000" in qemu2 ...
	W1204 12:54:35.842938    5298 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:54:35.842950    5298 start.go:729] Will try again in 5 seconds ...
	I1204 12:54:40.845320    5298 start.go:360] acquireMachinesLock for kubernetes-upgrade-617000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:54:40.846004    5298 start.go:364] duration metric: took 540.334µs to acquireMachinesLock for "kubernetes-upgrade-617000"
	I1204 12:54:40.846167    5298 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-617000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 12:54:40.846416    5298 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 12:54:40.853129    5298 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 12:54:40.901448    5298 start.go:159] libmachine.API.Create for "kubernetes-upgrade-617000" (driver="qemu2")
	I1204 12:54:40.901507    5298 client.go:168] LocalClient.Create starting
	I1204 12:54:40.901649    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 12:54:40.901737    5298 main.go:141] libmachine: Decoding PEM data...
	I1204 12:54:40.901753    5298 main.go:141] libmachine: Parsing certificate...
	I1204 12:54:40.901817    5298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 12:54:40.901878    5298 main.go:141] libmachine: Decoding PEM data...
	I1204 12:54:40.901890    5298 main.go:141] libmachine: Parsing certificate...
	I1204 12:54:40.902622    5298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 12:54:41.071431    5298 main.go:141] libmachine: Creating SSH key...
	I1204 12:54:41.178736    5298 main.go:141] libmachine: Creating Disk image...
	I1204 12:54:41.178748    5298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 12:54:41.178991    5298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:41.189339    5298 main.go:141] libmachine: STDOUT: 
	I1204 12:54:41.189364    5298 main.go:141] libmachine: STDERR: 
	I1204 12:54:41.189418    5298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2 +20000M
	I1204 12:54:41.198268    5298 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 12:54:41.198284    5298 main.go:141] libmachine: STDERR: 
	I1204 12:54:41.198305    5298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:41.198310    5298 main.go:141] libmachine: Starting QEMU VM...
	I1204 12:54:41.198318    5298 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:54:41.198351    5298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:53:c2:ac:ad:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:41.200182    5298 main.go:141] libmachine: STDOUT: 
	I1204 12:54:41.200198    5298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:54:41.200214    5298 client.go:171] duration metric: took 298.696125ms to LocalClient.Create
	I1204 12:54:43.202265    5298 start.go:128] duration metric: took 2.355773875s to createHost
	I1204 12:54:43.202280    5298 start.go:83] releasing machines lock for "kubernetes-upgrade-617000", held for 2.356216875s
	W1204 12:54:43.202371    5298 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:54:43.214644    5298 out.go:201] 
	W1204 12:54:43.217660    5298 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:54:43.217666    5298 out.go:270] * 
	* 
	W1204 12:54:43.218110    5298 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:54:43.224479    5298 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-617000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
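Both create attempts above die at the same step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which cannot connect to /var/run/socket_vmnet. A minimal, hypothetical Go probe (not part of the test suite; the socket path is taken verbatim from SocketVMnetPath in the cluster config above) that separates "daemon never started" from "socket present but nothing listening":

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from SocketVMnetPath in the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("socket file missing: %v (socket_vmnet never started?)\n", err)
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the error in this log:
		// the socket file exists but no daemon is accepting on it.
		fmt.Printf("socket_vmnet not accepting connections: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}

Either way the remedy is on the host rather than in minikube: the socket_vmnet daemon must be running and listening on that exact path before any qemu2-driver start in this report can succeed.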
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-617000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-617000: (2.992509625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-617000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-617000 status --format={{.Host}}: exit status 7 (67.947208ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-617000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-617000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.180566791s)

-- stdout --
	* [kubernetes-upgrade-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-617000" primary control-plane node in "kubernetes-upgrade-617000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-617000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-617000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 12:54:46.330967    5337 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:54:46.331133    5337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:54:46.331137    5337 out.go:358] Setting ErrFile to fd 2...
	I1204 12:54:46.331140    5337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:54:46.331278    5337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:54:46.332383    5337 out.go:352] Setting JSON to false
	I1204 12:54:46.351737    5337 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5057,"bootTime":1733340629,"procs":577,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:54:46.351817    5337 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:54:46.356043    5337 out.go:177] * [kubernetes-upgrade-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:54:46.363118    5337 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:54:46.363205    5337 notify.go:220] Checking for updates...
	I1204 12:54:46.370998    5337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:54:46.374069    5337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:54:46.378030    5337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:54:46.381054    5337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:54:46.384123    5337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:54:46.387265    5337 config.go:182] Loaded profile config "kubernetes-upgrade-617000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1204 12:54:46.387531    5337 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:54:46.391048    5337 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:54:46.398010    5337 start.go:297] selected driver: qemu2
	I1204 12:54:46.398018    5337 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-617000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:54:46.398081    5337 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:54:46.400883    5337 cni.go:84] Creating CNI manager for ""
	I1204 12:54:46.400917    5337 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:54:46.400937    5337 start.go:340] cluster config:
	{Name:kubernetes-upgrade-617000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:54:46.405264    5337 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:54:46.412041    5337 out.go:177] * Starting "kubernetes-upgrade-617000" primary control-plane node in "kubernetes-upgrade-617000" cluster
	I1204 12:54:46.415966    5337 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 12:54:46.415981    5337 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 12:54:46.415988    5337 cache.go:56] Caching tarball of preloaded images
	I1204 12:54:46.416059    5337 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:54:46.416065    5337 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 12:54:46.416115    5337 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/kubernetes-upgrade-617000/config.json ...
	I1204 12:54:46.416544    5337 start.go:360] acquireMachinesLock for kubernetes-upgrade-617000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:54:46.416579    5337 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "kubernetes-upgrade-617000"
	I1204 12:54:46.416590    5337 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:54:46.416594    5337 fix.go:54] fixHost starting: 
	I1204 12:54:46.416706    5337 fix.go:112] recreateIfNeeded on kubernetes-upgrade-617000: state=Stopped err=<nil>
	W1204 12:54:46.416713    5337 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:54:46.424039    5337 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-617000" ...
	I1204 12:54:46.428028    5337 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:54:46.428061    5337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:53:c2:ac:ad:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:46.430085    5337 main.go:141] libmachine: STDOUT: 
	I1204 12:54:46.430101    5337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:54:46.430126    5337 fix.go:56] duration metric: took 13.531166ms for fixHost
	I1204 12:54:46.430130    5337 start.go:83] releasing machines lock for "kubernetes-upgrade-617000", held for 13.545791ms
	W1204 12:54:46.430136    5337 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:54:46.430175    5337 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:54:46.430180    5337 start.go:729] Will try again in 5 seconds ...
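
The five-second pause logged above is minikube's single in-process retry before it gives up on the host. A minimal Go sketch of that retry shape (the delay and the messages are taken from this log; the real logic in start.go is more involved):

    // retry_start.go: a sketch, not minikube's code. One retry after a
    // fixed delay, mirroring the "Will try again in 5 seconds" line above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the driver start that fails above with
    // "Connection refused" on /var/run/socket_vmnet.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        const retryDelay = 5 * time.Second
        if err := startHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(retryDelay)
            if err := startHost(); err != nil {
                fmt.Printf("* Failed to start qemu2 VM: %v\n", err)
                return
            }
        }
        fmt.Println("host started")
    }
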
	I1204 12:54:51.432322    5337 start.go:360] acquireMachinesLock for kubernetes-upgrade-617000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:54:51.432441    5337 start.go:364] duration metric: took 96.25µs to acquireMachinesLock for "kubernetes-upgrade-617000"
	I1204 12:54:51.432474    5337 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:54:51.432478    5337 fix.go:54] fixHost starting: 
	I1204 12:54:51.432626    5337 fix.go:112] recreateIfNeeded on kubernetes-upgrade-617000: state=Stopped err=<nil>
	W1204 12:54:51.432631    5337 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:54:51.436771    5337 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-617000" ...
	I1204 12:54:51.444663    5337 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:54:51.444723    5337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:53:c2:ac:ad:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubernetes-upgrade-617000/disk.qcow2
	I1204 12:54:51.446967    5337 main.go:141] libmachine: STDOUT: 
	I1204 12:54:51.446981    5337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 12:54:51.447000    5337 fix.go:56] duration metric: took 14.5215ms for fixHost
	I1204 12:54:51.447005    5337 start.go:83] releasing machines lock for "kubernetes-upgrade-617000", held for 14.557166ms
	W1204 12:54:51.447048    5337 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-617000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-617000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 12:54:51.454769    5337 out.go:201] 
	W1204 12:54:51.458760    5337 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 12:54:51.458767    5337 out.go:270] * 
	* 
	W1204 12:54:51.459242    5337 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 12:54:51.469757    5337 out.go:201] 

** /stderr **
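
The failure above is the qemu2 driver failing to reach the socket_vmnet daemon on /var/run/socket_vmnet; Kubernetes never enters the picture. A minimal Go probe (not part of the test suite) that reproduces the check; when the daemon is down, this dial fails with the same "connection refused":

    // probe_socket_vmnet.go: dials the unix socket the qemu2 driver needs
    // before it can attach the VM's network.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from this log; installs may differ
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections at", sock)
    }
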
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-617000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-617000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-617000 version --output=json: exit status 1 (28.572333ms)

** stderr ** 
	error: context "kubernetes-upgrade-617000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-04 12:54:51.507852 -0800 PST m=+3781.715227751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-617000 -n kubernetes-upgrade-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-617000 -n kubernetes-upgrade-617000: exit status 7 (33.544542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-617000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-617000
--- FAIL: TestKubernetesUpgrade (18.36s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.09s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19985
- KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1052259774/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.09s)
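
DRV_UNSUPPORTED_OS is the expected outcome on this host: hyperkit is an Intel-only macOS hypervisor, so minikube refuses it on darwin/arm64 before doing any work. A schematic Go sketch of that kind of platform guard (illustrative only, not minikube's actual code; the exit code matches the failure above):

    // platform_guard.go: a sketch of a GOOS/GOARCH driver check.
    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    func main() {
        driver := "hyperkit"
        if driver == "hyperkit" && !(runtime.GOOS == "darwin" && runtime.GOARCH == "amd64") {
            fmt.Fprintf(os.Stderr,
                "X Exiting due to DRV_UNSUPPORTED_OS: The driver '%s' is not supported on %s/%s\n",
                driver, runtime.GOOS, runtime.GOARCH)
            os.Exit(56) // exit status 56, as reported by the test above
        }
        fmt.Println("driver supported on this platform")
    }
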

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19985
- KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current978995408/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)

TestStoppedBinaryUpgrade/Upgrade (574.96s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.766620088 start -p stopped-upgrade-827000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.766620088 start -p stopped-upgrade-827000 --memory=2200 --vm-driver=qemu2 : (41.133945416s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.766620088 -p stopped-upgrade-827000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.766620088 -p stopped-upgrade-827000 stop: (12.1180325s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-827000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1204 12:55:47.704430    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:57:03.732968    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:57:20.631587    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
E1204 13:00:47.709416    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-827000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.590734208s)

-- stdout --
	* [stopped-upgrade-827000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-827000" primary control-plane node in "stopped-upgrade-827000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-827000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1204 12:55:46.147159    5382 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:55:46.147320    5382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:55:46.147324    5382 out.go:358] Setting ErrFile to fd 2...
	I1204 12:55:46.147327    5382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:55:46.147502    5382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:55:46.148665    5382 out.go:352] Setting JSON to false
	I1204 12:55:46.170185    5382 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5117,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:55:46.170266    5382 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:55:46.174058    5382 out.go:177] * [stopped-upgrade-827000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:55:46.182003    5382 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:55:46.182026    5382 notify.go:220] Checking for updates...
	I1204 12:55:46.188891    5382 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:55:46.192890    5382 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:55:46.196790    5382 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:55:46.199985    5382 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:55:46.202992    5382 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:55:46.206235    5382 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:55:46.209952    5382 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 12:55:46.212937    5382 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:55:46.215991    5382 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:55:46.222897    5382 start.go:297] selected driver: qemu2
	I1204 12:55:46.222903    5382 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:55:46.222951    5382 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:55:46.226009    5382 cni.go:84] Creating CNI manager for ""
	I1204 12:55:46.226043    5382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:55:46.226074    5382 start.go:340] cluster config:
	{Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:55:46.226134    5382 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 12:55:46.234991    5382 out.go:177] * Starting "stopped-upgrade-827000" primary control-plane node in "stopped-upgrade-827000" cluster
	I1204 12:55:46.237909    5382 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 12:55:46.237925    5382 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1204 12:55:46.237935    5382 cache.go:56] Caching tarball of preloaded images
	I1204 12:55:46.238011    5382 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 12:55:46.238022    5382 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1204 12:55:46.238071    5382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/config.json ...
	I1204 12:55:46.238686    5382 start.go:360] acquireMachinesLock for stopped-upgrade-827000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 12:55:46.238739    5382 start.go:364] duration metric: took 40.625µs to acquireMachinesLock for "stopped-upgrade-827000"
	I1204 12:55:46.238748    5382 start.go:96] Skipping create...Using existing machine configuration
	I1204 12:55:46.238752    5382 fix.go:54] fixHost starting: 
	I1204 12:55:46.238874    5382 fix.go:112] recreateIfNeeded on stopped-upgrade-827000: state=Stopped err=<nil>
	W1204 12:55:46.238882    5382 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 12:55:46.244890    5382 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-827000" ...
	I1204 12:55:46.248931    5382 qemu.go:418] Using hvf for hardware acceleration
	I1204 12:55:46.249029    5382 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/qemu.pid -nic user,model=virtio,hostfwd=tcp::63822-:22,hostfwd=tcp::63823-:2376,hostname=stopped-upgrade-827000 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/disk.qcow2
	I1204 12:55:46.296885    5382 main.go:141] libmachine: STDOUT: 
	I1204 12:55:46.296913    5382 main.go:141] libmachine: STDERR: 
	I1204 12:55:46.296923    5382 main.go:141] libmachine: Waiting for VM to start (ssh -p 63822 docker@127.0.0.1)...
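
	"Waiting for VM to start" above is a poll against the host-forwarded SSH port (63822 in this run) until the guest accepts connections. A minimal Go sketch of such a port wait (the timeout and poll interval are assumptions, not minikube's values):

    // wait_for_ssh.go: polls a TCP port until it accepts connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForPort("127.0.0.1:63822", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("SSH port is accepting connections")
    }
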
	I1204 12:56:05.394323    5382 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/config.json ...
	I1204 12:56:05.394709    5382 machine.go:93] provisionDockerMachine start ...
	I1204 12:56:05.394805    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.395061    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.395068    5382 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 12:56:05.463797    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 12:56:05.463814    5382 buildroot.go:166] provisioning hostname "stopped-upgrade-827000"
	I1204 12:56:05.463888    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.464002    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.464009    5382 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-827000 && echo "stopped-upgrade-827000" | sudo tee /etc/hostname
	I1204 12:56:05.532721    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-827000
	
	I1204 12:56:05.532778    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.532884    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.532892    5382 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-827000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-827000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-827000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 12:56:05.603121    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 12:56:05.603134    5382 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19985-1334/.minikube CaCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19985-1334/.minikube}
	I1204 12:56:05.603150    5382 buildroot.go:174] setting up certificates
	I1204 12:56:05.603154    5382 provision.go:84] configureAuth start
	I1204 12:56:05.603161    5382 provision.go:143] copyHostCerts
	I1204 12:56:05.603239    5382 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem, removing ...
	I1204 12:56:05.603248    5382 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem
	I1204 12:56:05.603349    5382 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/cert.pem (1123 bytes)
	I1204 12:56:05.603534    5382 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem, removing ...
	I1204 12:56:05.603539    5382 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem
	I1204 12:56:05.603594    5382 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/key.pem (1679 bytes)
	I1204 12:56:05.603708    5382 exec_runner.go:144] found /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem, removing ...
	I1204 12:56:05.603714    5382 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem
	I1204 12:56:05.603792    5382 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.pem (1082 bytes)
	I1204 12:56:05.603885    5382 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-827000 san=[127.0.0.1 localhost minikube stopped-upgrade-827000]
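
	The server cert above is signed with the profile's CA key and carries the SANs listed in the log line. A self-signed Go sketch showing how that san=[...] list maps onto x509.Certificate fields (minikube's provision code signs with the CA instead of self-signing; the expiry reuses the CertExpiration value from the cluster config above):

    // gen_server_cert.go: a self-signed sketch of the SAN layout.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-827000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
            // the san=[...] entries from the log line, split by type:
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-827000"},
            KeyUsage:    x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
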
	I1204 12:56:05.772546    5382 provision.go:177] copyRemoteCerts
	I1204 12:56:05.772615    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 12:56:05.772627    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 12:56:05.809385    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 12:56:05.816568    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1204 12:56:05.823297    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1204 12:56:05.830215    5382 provision.go:87] duration metric: took 227.049292ms to configureAuth
	I1204 12:56:05.830224    5382 buildroot.go:189] setting minikube options for container-runtime
	I1204 12:56:05.830328    5382 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 12:56:05.830384    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.830476    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.830482    5382 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 12:56:05.899607    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 12:56:05.899616    5382 buildroot.go:70] root file system type: tmpfs
	I1204 12:56:05.899670    5382 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 12:56:05.899730    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.899847    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.899881    5382 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 12:56:05.968168    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 12:56:05.968242    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:05.968352    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:05.968362    5382 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 12:56:06.349184    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 12:56:06.349199    5382 machine.go:96] duration metric: took 954.472417ms to provisionDockerMachine
	I1204 12:56:06.349206    5382 start.go:293] postStartSetup for "stopped-upgrade-827000" (driver="qemu2")
	I1204 12:56:06.349213    5382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 12:56:06.349281    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 12:56:06.349291    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 12:56:06.385410    5382 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 12:56:06.386817    5382 info.go:137] Remote host: Buildroot 2021.02.12
	I1204 12:56:06.386825    5382 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/addons for local assets ...
	I1204 12:56:06.386918    5382 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19985-1334/.minikube/files for local assets ...
	I1204 12:56:06.387063    5382 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem -> 18562.pem in /etc/ssl/certs
	I1204 12:56:06.387228    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 12:56:06.390134    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:56:06.397344    5382 start.go:296] duration metric: took 48.1325ms for postStartSetup
	I1204 12:56:06.397359    5382 fix.go:56] duration metric: took 20.158358875s for fixHost
	I1204 12:56:06.397403    5382 main.go:141] libmachine: Using SSH client type: native
	I1204 12:56:06.397510    5382 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102acefc0] 0x102ad1800 <nil>  [] 0s} localhost 63822 <nil> <nil>}
	I1204 12:56:06.397518    5382 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 12:56:06.463388    5382 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733345766.381559087
	
	I1204 12:56:06.463403    5382 fix.go:216] guest clock: 1733345766.381559087
	I1204 12:56:06.463407    5382 fix.go:229] Guest: 2024-12-04 12:56:06.381559087 -0800 PST Remote: 2024-12-04 12:56:06.39736 -0800 PST m=+20.282119751 (delta=-15.800913ms)
	I1204 12:56:06.463417    5382 fix.go:200] guest clock delta is within tolerance: -15.800913ms
	I1204 12:56:06.463420    5382 start.go:83] releasing machines lock for "stopped-upgrade-827000", held for 20.22442825s
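
	The guest-clock check above runs `date +%s.%N` inside the VM and compares the result against the host clock; a delta within tolerance means no resync is needed. A minimal Go sketch of that comparison (the tolerance value is an assumption; the guest timestamp is the one captured in this log):

    // clock_delta.go: host-vs-guest clock comparison, as in fix.go above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // guest value captured in the log: 1733345766.381559087
        guest := time.Unix(1733345766, 381559087)
        delta := time.Since(guest)
        const tolerance = 1 * time.Second // illustrative threshold only
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
            return
        }
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    }
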
	I1204 12:56:06.463506    5382 ssh_runner.go:195] Run: cat /version.json
	I1204 12:56:06.463517    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 12:56:06.463507    5382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 12:56:06.463546    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	W1204 12:56:06.464274    5382 sshutil.go:64] dial failure (will retry): dial tcp [::1]:63822: connect: connection refused
	I1204 12:56:06.464300    5382 retry.go:31] will retry after 331.644316ms: dial tcp [::1]:63822: connect: connection refused
	W1204 12:56:06.859274    5382 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1204 12:56:06.859479    5382 ssh_runner.go:195] Run: systemctl --version
	I1204 12:56:06.864068    5382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 12:56:06.867449    5382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 12:56:06.867533    5382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1204 12:56:06.874264    5382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1204 12:56:06.883205    5382 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
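
	The two find/sed pipelines above force every bridge and podman CNI config onto the 10.244.0.0/16 pod subnet. A Go sketch doing the same rewrite structurally rather than with sed (the conflist fragment is a stand-in; the real file in this run is /etc/cni/net.d/87-podman-bridge.conflist):

    // patch_cni_subnet.go: rewrites every "subnet" key in a conflist.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // patch walks the decoded JSON and replaces any "subnet" value.
    func patch(v interface{}) {
        switch t := v.(type) {
        case map[string]interface{}:
            for k, val := range t {
                if k == "subnet" {
                    t[k] = "10.244.0.0/16"
                    continue
                }
                patch(val)
            }
        case []interface{}:
            for _, val := range t {
                patch(val)
            }
        }
    }

    func main() {
        // stand-in fragment of a bridge conflist
        in := []byte(`{"plugins":[{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}]}`)
        var doc map[string]interface{}
        if err := json.Unmarshal(in, &doc); err != nil {
            panic(err)
        }
        patch(doc)
        out, _ := json.MarshalIndent(doc, "", "  ")
        fmt.Println(string(out))
    }
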
	I1204 12:56:06.883226    5382 start.go:495] detecting cgroup driver to use...
	I1204 12:56:06.883348    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:56:06.894750    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1204 12:56:06.899287    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 12:56:06.903094    5382 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 12:56:06.903133    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 12:56:06.906938    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:56:06.910686    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 12:56:06.914337    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 12:56:06.917967    5382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 12:56:06.921394    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 12:56:06.924533    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 12:56:06.927463    5382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 12:56:06.930691    5382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 12:56:06.933981    5382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 12:56:06.937565    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:07.002203    5382 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 12:56:07.008547    5382 start.go:495] detecting cgroup driver to use...
	I1204 12:56:07.008622    5382 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 12:56:07.014175    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:56:07.019310    5382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 12:56:07.027024    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 12:56:07.032280    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 12:56:07.036559    5382 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 12:56:07.058395    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 12:56:07.063368    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 12:56:07.068614    5382 ssh_runner.go:195] Run: which cri-dockerd
	I1204 12:56:07.069838    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 12:56:07.072518    5382 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1204 12:56:07.077463    5382 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 12:56:07.166285    5382 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 12:56:07.254266    5382 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 12:56:07.254326    5382 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 12:56:07.260004    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:07.316359    5382 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 12:56:08.466825    5382 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.150432958s)
	I1204 12:56:08.466900    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 12:56:08.471321    5382 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 12:56:08.477833    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:56:08.483154    5382 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 12:56:08.563153    5382 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 12:56:08.626965    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:08.705935    5382 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 12:56:08.712126    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 12:56:08.716552    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:08.795409    5382 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 12:56:08.836194    5382 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 12:56:08.836293    5382 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 12:56:08.839996    5382 start.go:563] Will wait 60s for crictl version
	I1204 12:56:08.840059    5382 ssh_runner.go:195] Run: which crictl
	I1204 12:56:08.841347    5382 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 12:56:08.857075    5382 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1204 12:56:08.857158    5382 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 12:56:08.874297    5382 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 12:56:08.894427    5382 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1204 12:56:08.894593    5382 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1204 12:56:08.895879    5382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 12:56:08.899407    5382 kubeadm.go:883] updating cluster {Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1204 12:56:08.899449    5382 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 12:56:08.899499    5382 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:56:08.909561    5382 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 12:56:08.909569    5382 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 12:56:08.909631    5382 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 12:56:08.913423    5382 ssh_runner.go:195] Run: which lz4
	I1204 12:56:08.914873    5382 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 12:56:08.916196    5382 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 12:56:08.916207    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1204 12:56:09.866144    5382 docker.go:653] duration metric: took 951.297584ms to copy over tarball
	I1204 12:56:09.866225    5382 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 12:56:11.070090    5382 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.203829666s)
	I1204 12:56:11.070112    5382 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 12:56:11.086536    5382 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 12:56:11.090263    5382 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1204 12:56:11.095174    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:11.175185    5382 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 12:56:12.610954    5382 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.435709167s)
	I1204 12:56:12.611079    5382 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 12:56:12.625590    5382 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 12:56:12.625601    5382 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 12:56:12.625607    5382 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 12:56:12.629884    5382 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:12.631612    5382 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:12.633716    5382 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:12.633871    5382 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:12.636171    5382 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:12.636344    5382 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:12.638101    5382 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:12.638175    5382 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:12.639367    5382 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:12.639550    5382 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:12.640789    5382 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:12.641398    5382 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:12.642213    5382 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:12.642297    5382 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 12:56:12.643360    5382 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:12.644315    5382 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	W1204 12:56:13.256819    5382 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 12:56:13.256971    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:13.264758    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:13.271451    5382 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1204 12:56:13.271485    5382 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:13.271562    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 12:56:13.273031    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:13.280470    5382 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1204 12:56:13.280497    5382 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:13.280568    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1204 12:56:13.292280    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 12:56:13.292306    5382 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1204 12:56:13.292337    5382 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:13.292388    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1204 12:56:13.292428    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1204 12:56:13.299049    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1204 12:56:13.299189    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1204 12:56:13.306500    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:13.308192    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1204 12:56:13.308236    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1204 12:56:13.308250    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1204 12:56:13.308258    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1204 12:56:13.308268    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1204 12:56:13.333760    5382 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1204 12:56:13.333785    5382 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:13.333853    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1204 12:56:13.338184    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:13.349723    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1204 12:56:13.387402    5382 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1204 12:56:13.387424    5382 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:13.387498    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1204 12:56:13.410646    5382 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1204 12:56:13.410663    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1204 12:56:13.449753    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1204 12:56:13.470007    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:13.485472    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1204 12:56:13.520443    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1204 12:56:13.520451    5382 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1204 12:56:13.520477    5382 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:13.520543    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 12:56:13.551240    5382 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1204 12:56:13.551262    5382 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1204 12:56:13.551269    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1204 12:56:13.551327    5382 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W1204 12:56:13.565207    5382 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 12:56:13.565337    5382 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:13.594503    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1204 12:56:13.594655    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1204 12:56:13.596393    5382 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1204 12:56:13.596418    5382 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:13.596477    5382 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 12:56:13.611929    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1204 12:56:13.611967    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1204 12:56:13.638401    5382 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 12:56:13.638545    5382 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 12:56:13.649959    5382 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1204 12:56:13.649984    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1204 12:56:13.651634    5382 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 12:56:13.651662    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1204 12:56:13.716737    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1204 12:56:13.716760    5382 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1204 12:56:13.716768    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1204 12:56:13.848026    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1204 12:56:13.848048    5382 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 12:56:13.848056    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1204 12:56:14.084155    5382 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 12:56:14.084197    5382 cache_images.go:92] duration metric: took 1.458565209s to LoadCachedImages
	W1204 12:56:14.084238    5382 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
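The sequence above is the per-image fallback: `docker image inspect` compares the runtime's image ID against the cached hash, and a mismatch (here caused by amd64 preload images on an arm64 host) triggers `docker rmi`, an scp of the cached tar, and a `docker load`. It ends in a warning because kube-proxy_v1.24.1 is missing from the host cache. A compact Go sketch of that decision, with illustrative paths and a plain `cp` standing in for scp:

    // Sketch of the cache_images "needs transfer" flow logged above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func ensureImage(name, wantID, cachedTar, remoteTar string) error {
        out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", name).Output()
        if strings.TrimSpace(string(out)) == wantID {
            return nil // correct hash already present in the runtime
        }
        exec.Command("docker", "rmi", name).Run() // drop the wrong-arch/wrong-hash copy
        if err := exec.Command("cp", cachedTar, remoteTar).Run(); err != nil { // stands in for scp
            return err
        }
        return exec.Command("sh", "-c", "docker load < "+remoteTar).Run()
    }

    func main() {
        err := ensureImage(
            "registry.k8s.io/pause:3.7",
            "sha256:e5a475a03805...", // truncated for the sketch
            "/tmp/cache/pause_3.7",
            "/tmp/images/pause_3.7",
        )
        fmt.Println("ensureImage:", err)
    }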
	I1204 12:56:14.084243    5382 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1204 12:56:14.084290    5382 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-827000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
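The kubelet flags above are rendered into a systemd drop-in and copied over ssh (the 10-kubeadm.conf scp appears further down). A small Go sketch of that rendering step, using only values visible in the log:

    // Render the kubelet drop-in text shown above via string templating.
    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    const dropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.Node}} --node-ip={{.IP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        err := t.Execute(os.Stdout, map[string]string{
            "Version": "v1.24.1", "Node": "stopped-upgrade-827000", "IP": "10.0.2.15",
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }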
	I1204 12:56:14.084362    5382 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 12:56:14.098149    5382 cni.go:84] Creating CNI manager for ""
	I1204 12:56:14.098161    5382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 12:56:14.098170    5382 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 12:56:14.098182    5382 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-827000 NodeName:stopped-upgrade-827000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 12:56:14.098253    5382 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-827000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
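The `cgroupDriver: cgroupfs` setting in the KubeletConfiguration above has to match what the container runtime reports, which is why the log runs `docker info --format {{.CgroupDriver}}` just before generating this config. A sketch of that consistency check, assuming a reachable docker daemon:

    // Compare docker's cgroup driver to the value baked into kubeadm.yaml.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            panic(err)
        }
        driver := strings.TrimSpace(string(out))
        fmt.Println("docker cgroup driver:", driver)
        if driver != "cgroupfs" {
            fmt.Println("kubeadm config would need cgroupDriver:", driver)
        }
    }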
	
	I1204 12:56:14.098321    5382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1204 12:56:14.101625    5382 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 12:56:14.101665    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 12:56:14.104403    5382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1204 12:56:14.109263    5382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 12:56:14.114254    5382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1204 12:56:14.119946    5382 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1204 12:56:14.121073    5382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
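The one-liner above rewrites /etc/hosts atomically: strip any stale `control-plane.minikube.internal` mapping, append the current one, write the result to a temp file, then `sudo cp` it into place. The same idea in Go (the temp path here is illustrative; the real command uses /tmp/h.$$):

    // Filter-and-append rewrite of /etc/hosts, staged through a temp file.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostLine = "10.0.2.15\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, hostLine)
        tmp := "/tmp/hosts.new"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        fmt.Println("sudo cp", tmp, "/etc/hosts")
    }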
	I1204 12:56:14.124484    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 12:56:14.209855    5382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 12:56:14.221066    5382 certs.go:68] Setting up /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000 for IP: 10.0.2.15
	I1204 12:56:14.221077    5382 certs.go:194] generating shared ca certs ...
	I1204 12:56:14.221085    5382 certs.go:226] acquiring lock for ca certs: {Name:mk686f72a960a82dacaf4c130e092ac78361d077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.221273    5382 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key
	I1204 12:56:14.221552    5382 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key
	I1204 12:56:14.221559    5382 certs.go:256] generating profile certs ...
	I1204 12:56:14.221805    5382 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.key
	I1204 12:56:14.221821    5382 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32
	I1204 12:56:14.221830    5382 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1204 12:56:14.384596    5382 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32 ...
	I1204 12:56:14.384610    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32: {Name:mkb02dddabe8308f2532bcf99f1dd0c86932dd1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.384947    5382 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32 ...
	I1204 12:56:14.384952    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32: {Name:mk59f57f124d79495880c414dd717ad3ede2f670 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.385123    5382 certs.go:381] copying /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt.fdd81b32 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt
	I1204 12:56:14.385267    5382 certs.go:385] copying /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key.fdd81b32 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key
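The apiserver serving cert is regenerated with the four IP SANs listed at crypto.go:68 so that in-cluster (10.96.0.1), loopback, and guest (10.0.2.15) clients can all verify it. A self-contained Go sketch of issuing a cert with those SANs; it self-signs for brevity where minikube actually signs with minikubeCA:

    // Issue a serving certificate carrying the four IP SANs from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // The four IP SANs from the crypto.go:68 line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }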
	I1204 12:56:14.385661    5382 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/proxy-client.key
	I1204 12:56:14.385868    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem (1338 bytes)
	W1204 12:56:14.386063    5382 certs.go:480] ignoring /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856_empty.pem, impossibly tiny 0 bytes
	I1204 12:56:14.386074    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca-key.pem (1679 bytes)
	I1204 12:56:14.386104    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem (1082 bytes)
	I1204 12:56:14.386126    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem (1123 bytes)
	I1204 12:56:14.386151    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/key.pem (1679 bytes)
	I1204 12:56:14.386201    5382 certs.go:484] found cert: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem (1708 bytes)
	I1204 12:56:14.386553    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 12:56:14.393574    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1204 12:56:14.400101    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 12:56:14.407199    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 12:56:14.414647    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 12:56:14.422210    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 12:56:14.429303    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 12:56:14.436685    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1204 12:56:14.443440    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 12:56:14.450269    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/1856.pem --> /usr/share/ca-certificates/1856.pem (1338 bytes)
	I1204 12:56:14.457763    5382 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/ssl/certs/18562.pem --> /usr/share/ca-certificates/18562.pem (1708 bytes)
	I1204 12:56:14.464510    5382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 12:56:14.469607    5382 ssh_runner.go:195] Run: openssl version
	I1204 12:56:14.471497    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 12:56:14.474543    5382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:56:14.475962    5382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 19:52 /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:56:14.475992    5382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 12:56:14.477746    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 12:56:14.480886    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1856.pem && ln -fs /usr/share/ca-certificates/1856.pem /etc/ssl/certs/1856.pem"
	I1204 12:56:14.483860    5382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1856.pem
	I1204 12:56:14.485168    5382 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 20:00 /usr/share/ca-certificates/1856.pem
	I1204 12:56:14.485200    5382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1856.pem
	I1204 12:56:14.486904    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1856.pem /etc/ssl/certs/51391683.0"
	I1204 12:56:14.490311    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18562.pem && ln -fs /usr/share/ca-certificates/18562.pem /etc/ssl/certs/18562.pem"
	I1204 12:56:14.493667    5382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18562.pem
	I1204 12:56:14.495054    5382 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 20:00 /usr/share/ca-certificates/18562.pem
	I1204 12:56:14.495095    5382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18562.pem
	I1204 12:56:14.497197    5382 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18562.pem /etc/ssl/certs/3ec20f2e.0"
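The `openssl x509 -hash` runs above compute OpenSSL's subject hash so each CA can be linked as `/etc/ssl/certs/<hash>.0` (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trust anchors. A sketch of that rehash step, assuming the `openssl` binary and an existing PEM file at the given path:

    // c_rehash-style symlink: <subject-hash>.0 -> the CA PEM.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := "/etc/ssl/certs/" + hash + ".0"
        fmt.Println("ln -fs", pemPath, link)
        _ = os.Symlink(pemPath, link) // needs root; error ignored in this sketch
    }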
	I1204 12:56:14.500146    5382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 12:56:14.501478    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 12:56:14.503642    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 12:56:14.505457    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 12:56:14.508024    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 12:56:14.509777    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 12:56:14.511536    5382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
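Each `-checkend 86400` invocation above asks whether the certificate will still be valid 24 hours from now. The equivalent check in Go, with an illustrative path:

    // Mirror `openssl x509 -checkend 86400`: NotAfter must be past now+24h.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(86400 * time.Second)
        fmt.Println("expires:", cert.NotAfter, "ok:", cert.NotAfter.After(deadline))
    }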
	I1204 12:56:14.513374    5382 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-827000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:63857 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-827000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 12:56:14.513445    5382 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:56:14.523829    5382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 12:56:14.527015    5382 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 12:56:14.527024    5382 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 12:56:14.527058    5382 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 12:56:14.529968    5382 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 12:56:14.530270    5382 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-827000" does not appear in /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:56:14.530368    5382 kubeconfig.go:62] /Users/jenkins/minikube-integration/19985-1334/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-827000" cluster setting kubeconfig missing "stopped-upgrade-827000" context setting]
	I1204 12:56:14.530570    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 12:56:14.531016    5382 kapi.go:59] client config for stopped-upgrade-827000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.key", CAFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10452b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 12:56:14.531525    5382 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 12:56:14.534306    5382 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-827000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
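The unified diff above is how kubeadm.go:640 detects drift: `diff -u` exits non-zero when the deployed kubeadm.yaml differs from the freshly rendered one, and any difference forces a reconfigure. A sketch using the paths from the log:

    // Treat a non-zero `diff -u` exit as "config drifted, reconfigure".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        if err != nil { // diff exits 1 on any difference
            fmt.Println("config drift detected:\n" + string(out))
            return
        }
        fmt.Println("kubeadm.yaml unchanged")
    }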
	I1204 12:56:14.534311    5382 kubeadm.go:1160] stopping kube-system containers ...
	I1204 12:56:14.534360    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 12:56:14.544904    5382 docker.go:483] Stopping containers: [7b2edfde1470 62e56b454444 7a4a4f7d1323 01a8a4e18f3f 3d1d1ce7fee7 67a82ac16594 3d3df5af7004 58290a52fcff]
	I1204 12:56:14.544973    5382 ssh_runner.go:195] Run: docker stop 7b2edfde1470 62e56b454444 7a4a4f7d1323 01a8a4e18f3f 3d1d1ce7fee7 67a82ac16594 3d3df5af7004 58290a52fcff
	I1204 12:56:14.555330    5382 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 12:56:14.561217    5382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 12:56:14.563988    5382 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 12:56:14.563997    5382 kubeadm.go:157] found existing configuration files:
	
	I1204 12:56:14.564027    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf
	I1204 12:56:14.566545    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 12:56:14.566575    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 12:56:14.569567    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf
	I1204 12:56:14.572242    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 12:56:14.572268    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 12:56:14.574775    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf
	I1204 12:56:14.577870    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 12:56:14.577896    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 12:56:14.580829    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf
	I1204 12:56:14.583426    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 12:56:14.583470    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
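The four grep-then-rm exchanges above implement a stale-config sweep: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed so the kubeconfig phase can recreate it (here the files are simply absent). The same sweep in Go:

    // Remove kubeconfigs that lack the expected control-plane endpoint.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:63857"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Println("removing stale", path)
                _ = os.Remove(path) // rm -f semantics: ignore errors
            }
        }
    }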
	I1204 12:56:14.586559    5382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 12:56:14.589899    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:14.614414    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:15.093500    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:15.218791    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 12:56:15.242966    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
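Rather than a full `kubeadm init`, the restart path replays individual init phases against the existing data, as the five commands above show. A sketch that drives the same sequence, using the binary and config paths from the log:

    // Replay the kubeadm init phases in the order the log runs them.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
        for _, phase := range [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        } {
            args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", phase, err)
                return
            }
        }
    }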
	I1204 12:56:15.263080    5382 api_server.go:52] waiting for apiserver process to appear ...
	I1204 12:56:15.263168    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:15.765197    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:16.265228    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 12:56:16.269621    5382 api_server.go:72] duration metric: took 1.006529625s to wait for apiserver process to appear ...
	I1204 12:56:16.269631    5382 api_server.go:88] waiting for apiserver healthz status ...
	I1204 12:56:16.269645    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:21.271798    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:21.271833    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:26.272191    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:26.272221    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:31.272660    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:31.272684    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:36.273279    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:36.273375    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:41.274445    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:41.274479    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:46.275539    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:46.275580    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:51.277242    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:51.277265    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:56:56.278896    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:56:56.278980    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:01.281407    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:01.281430    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:06.283408    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:06.283466    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:11.284140    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:11.284160    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:16.286410    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
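Each `api_server.go:269` line above is one failed probe of the apiserver's /healthz endpoint: the client times out after about five seconds and retries. A minimal Go version of that probe (TLS verification is skipped here for brevity; minikube verifies against the cluster CA):

    // Single healthz probe against the endpoint that keeps timing out above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // matches the repeated api_server.go:269 lines
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }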
	I1204 12:57:16.286646    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:16.303009    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:16.303103    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:16.316291    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:16.316374    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:16.327446    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:16.327526    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:16.338476    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:16.338562    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:16.349319    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:16.349395    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:16.359896    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:16.359982    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:16.373286    5382 logs.go:282] 0 containers: []
	W1204 12:57:16.373298    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:16.373361    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:16.383153    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:16.383171    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:16.383176    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:16.394193    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:16.394206    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:16.435862    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:16.435874    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:16.451391    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:16.451400    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:16.465288    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:16.465297    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:16.480609    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:16.480624    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:16.498090    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:16.498099    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:16.513213    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:16.513228    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:16.524787    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:16.524799    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:16.540932    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:16.540947    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:16.545161    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:16.545167    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:16.556607    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:16.556627    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:16.570185    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:16.570195    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:16.581992    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:16.582002    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:16.595227    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:16.595237    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:16.620337    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:16.620346    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:16.656458    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:16.656464    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
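When healthz keeps failing, minikube gathers diagnostics: for every control-plane component it lists matching `k8s_*` containers, then tails 400 lines of each one's logs, plus dmesg, journalctl, and `kubectl describe nodes`, as the block above shows. A sketch of the container part of that sweep:

    // List k8s_<component> containers and tail each one's recent logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
            if err != nil {
                panic(err)
            }
            for _, id := range strings.Fields(string(out)) {
                fmt.Printf("=== %s [%s] ===\n", name, id)
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(logs))
            }
        }
    }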
	I1204 12:57:19.267631    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:24.269958    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:24.270122    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:24.287182    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:24.287265    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:24.299033    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:24.299121    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:24.309659    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:24.309731    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:24.320147    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:24.320223    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:24.330434    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:24.330499    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:24.342062    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:24.342142    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:24.352046    5382 logs.go:282] 0 containers: []
	W1204 12:57:24.352057    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:24.352119    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:24.362496    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:24.362513    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:24.362518    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:24.381400    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:24.381412    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:24.392945    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:24.392955    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:24.417965    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:24.417972    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:24.431506    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:24.431518    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:24.446264    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:24.446277    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:24.458159    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:24.458173    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:24.463067    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:24.463074    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:24.476937    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:24.476952    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:24.518323    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:24.518333    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:24.532719    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:24.532730    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:24.548806    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:24.548821    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:24.586641    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:24.586652    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:24.630817    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:24.630827    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:24.647189    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:24.647201    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:24.658651    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:24.658666    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:24.669736    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:24.669746    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:27.183377    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:32.185742    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:32.185954    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:32.206594    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:32.206702    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:32.221515    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:32.221599    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:32.233660    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:32.233741    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:32.244788    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:32.244868    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:32.255105    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:32.255180    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:32.265154    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:32.265221    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:32.275510    5382 logs.go:282] 0 containers: []
	W1204 12:57:32.275526    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:32.275589    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:32.288558    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:32.288577    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:32.288582    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:32.302217    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:32.302228    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:32.319366    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:32.319376    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:32.331178    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:32.331190    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:32.343086    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:32.343099    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:32.379443    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:32.379453    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:32.420078    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:32.420091    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:32.435020    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:32.435032    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:32.446764    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:32.446776    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:32.458631    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:32.458644    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:32.472989    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:32.473000    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:32.484423    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:32.484433    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:32.522169    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:32.522181    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:32.533774    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:32.533786    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:32.548876    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:32.548887    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:32.552989    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:32.552997    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:32.571727    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:32.571739    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:35.097161    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:40.099426    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:40.099582    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:40.115030    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:40.115123    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:40.126897    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:40.126970    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:40.137516    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:40.137593    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:40.150084    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:40.150170    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:40.160743    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:40.160821    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:40.171055    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:40.171122    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:40.181362    5382 logs.go:282] 0 containers: []
	W1204 12:57:40.181374    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:40.181433    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:40.192077    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:40.192093    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:40.192098    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:40.203651    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:40.203663    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:40.207922    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:40.207928    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:40.245063    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:40.245075    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:40.258937    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:40.258946    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:40.280689    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:40.280707    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:40.292643    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:40.292658    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:40.308912    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:40.308923    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:40.347844    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:40.347866    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:40.364544    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:40.364555    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:40.375967    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:40.375976    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:40.414958    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:40.414972    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:40.428347    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:40.428357    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:40.446055    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:40.446064    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:40.472083    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:40.472090    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:40.486737    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:40.486747    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:40.498256    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:40.498267    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:43.018995    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:48.021435    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:48.021641    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:48.037971    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:48.038066    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:48.051228    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:48.051309    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:48.062102    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:48.062183    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:48.077966    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:48.078053    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:48.088885    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:48.088966    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:48.099260    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:48.099341    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:48.109516    5382 logs.go:282] 0 containers: []
	W1204 12:57:48.109530    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:48.109594    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:48.119752    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:48.119768    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:48.119773    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:48.131577    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:48.131588    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:48.145579    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:48.145592    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:48.158331    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:48.158341    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:48.170706    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:48.170718    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:48.181896    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:48.181906    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:48.196692    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:48.196707    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:48.220018    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:48.220028    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:48.223927    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:48.223936    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:48.241613    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:48.241625    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:48.283742    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:48.283752    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:48.322246    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:48.322257    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:48.342168    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:48.342179    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:48.356485    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:48.356494    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:48.368754    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:48.368769    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:48.402928    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:48.402937    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:48.418147    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:48.418159    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:50.938435    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:57:55.940931    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:57:55.941390    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:57:55.971330    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:57:55.971484    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:57:55.989325    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:57:55.989444    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:57:56.003213    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:57:56.003295    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:57:56.015182    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:57:56.015265    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:57:56.025697    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:57:56.025775    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:57:56.036646    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:57:56.036727    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:57:56.047478    5382 logs.go:282] 0 containers: []
	W1204 12:57:56.047489    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:57:56.047557    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:57:56.058449    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:57:56.058468    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:57:56.058475    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:57:56.070360    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:57:56.070374    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:57:56.086586    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:57:56.086598    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:57:56.098293    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:57:56.098306    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:57:56.122741    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:57:56.122756    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:57:56.158446    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:57:56.158457    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:57:56.173115    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:57:56.173125    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:57:56.188127    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:57:56.188145    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:57:56.199751    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:57:56.199762    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:57:56.217127    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:57:56.217137    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:57:56.255843    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:57:56.255857    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:57:56.293027    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:57:56.293042    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:57:56.307231    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:57:56.307245    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:57:56.311404    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:57:56.311410    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:57:56.329538    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:57:56.329549    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:57:56.341526    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:57:56.341537    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:57:56.356129    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:57:56.356144    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:57:58.870330    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:03.872769    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:03.873254    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:03.920286    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:03.920409    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:03.936103    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:03.936200    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:03.948475    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:03.948560    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:03.959645    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:03.959730    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:03.970138    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:03.970213    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:03.980756    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:03.980831    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:03.991297    5382 logs.go:282] 0 containers: []
	W1204 12:58:03.991312    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:03.991380    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:04.001906    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:04.001922    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:04.001929    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:04.023273    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:04.023284    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:04.038249    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:04.038261    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:04.042823    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:04.042830    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:04.078248    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:04.078262    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:04.092753    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:04.092765    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:04.106940    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:04.106951    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:04.118620    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:04.118634    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:04.143596    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:04.143607    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:04.157353    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:04.157363    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:04.194972    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:04.194983    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:04.232726    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:04.232737    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:04.244621    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:04.244637    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:04.257510    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:04.257524    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:04.275405    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:04.275414    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:04.290483    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:04.290493    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:04.302238    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:04.302253    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:06.818322    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:11.821134    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:11.821355    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:11.841153    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:11.841260    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:11.855327    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:11.855414    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:11.867658    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:11.867741    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:11.878508    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:11.878590    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:11.889019    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:11.889090    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:11.900564    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:11.900632    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:11.913775    5382 logs.go:282] 0 containers: []
	W1204 12:58:11.913787    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:11.913870    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:11.924486    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:11.924508    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:11.924514    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:11.960060    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:11.960075    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:11.977254    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:11.977271    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:11.990646    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:11.990656    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:12.006248    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:12.006259    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:12.019582    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:12.019593    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:12.031106    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:12.031118    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:12.035305    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:12.035312    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:12.048946    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:12.048957    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:12.072544    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:12.072552    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:12.084391    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:12.084402    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:12.121390    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:12.121398    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:12.135070    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:12.135100    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:12.176741    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:12.176751    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:12.194583    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:12.194593    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:12.206853    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:12.206863    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:12.223923    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:12.223934    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:14.739797    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:19.742282    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:19.742487    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:19.756323    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:19.756423    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:19.767813    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:19.767890    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:19.778092    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:19.778172    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:19.788745    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:19.788820    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:19.799162    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:19.799250    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:19.809490    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:19.809567    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:19.819355    5382 logs.go:282] 0 containers: []
	W1204 12:58:19.819369    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:19.819433    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:19.830066    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:19.830085    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:19.830090    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:19.845063    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:19.845076    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:19.859780    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:19.859794    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:19.871528    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:19.871538    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:19.908243    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:19.908251    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:19.941899    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:19.941910    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:19.981497    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:19.981510    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:20.006445    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:20.006457    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:20.024480    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:20.024491    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:20.036552    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:20.036565    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:20.052226    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:20.052240    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:20.064139    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:20.064150    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:20.078061    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:20.078072    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:20.102145    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:20.102153    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:20.106647    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:20.106654    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:20.118015    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:20.118025    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:20.133376    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:20.133386    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:22.652482    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:27.654860    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:27.655125    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:27.678532    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:27.678659    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:27.694215    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:27.694295    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:27.706833    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:27.706900    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:27.717722    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:27.717790    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:27.728206    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:27.728280    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:27.740212    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:27.740275    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:27.750285    5382 logs.go:282] 0 containers: []
	W1204 12:58:27.750304    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:27.750360    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:27.761669    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:27.761689    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:27.761694    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:27.775551    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:27.775567    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:27.789212    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:27.789226    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:27.803443    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:27.803453    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:27.814801    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:27.814817    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:27.819267    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:27.819275    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:27.853628    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:27.853643    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:27.865357    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:27.865370    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:27.882700    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:27.882711    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:27.904271    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:27.904281    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:27.928390    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:27.928397    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:27.945497    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:27.945508    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:27.984379    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:27.984391    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:28.022570    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:28.022583    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:28.037296    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:28.037309    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:28.059725    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:28.059740    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:28.074820    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:28.074833    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:30.589622    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:35.592324    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:35.592447    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:35.603415    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:35.603493    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:35.614289    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:35.614363    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:35.624356    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:35.624428    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:35.634494    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:35.634575    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:35.645254    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:35.645324    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:35.655922    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:35.656006    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:35.666605    5382 logs.go:282] 0 containers: []
	W1204 12:58:35.666616    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:35.666675    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:35.676755    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:35.676774    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:35.676779    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:35.698936    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:35.698947    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:35.711030    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:35.711040    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:35.715394    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:35.715400    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:35.729762    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:35.729772    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:35.740820    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:35.740831    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:35.752660    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:35.752671    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:35.789089    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:35.789098    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:35.826178    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:35.826188    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:35.841325    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:35.841336    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:35.856311    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:35.856322    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:35.890202    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:35.890215    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:35.904029    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:35.904039    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:35.916438    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:35.916450    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:35.934023    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:35.934032    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:35.948430    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:35.948440    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:35.959521    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:35.959531    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:38.485029    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:43.487530    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:43.487884    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:43.515662    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:43.515820    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:43.535165    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:43.535261    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:43.549678    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:43.549764    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:43.561195    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:43.561280    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:43.571692    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:43.571762    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:43.582368    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:43.582438    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:43.600345    5382 logs.go:282] 0 containers: []
	W1204 12:58:43.600356    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:43.600425    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:43.611454    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:43.611472    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:43.611478    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:43.650247    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:43.650255    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:43.664089    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:43.664101    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:43.675517    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:43.675529    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:43.687375    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:43.687389    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:43.705298    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:43.705309    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:43.716495    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:43.716505    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:43.730715    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:43.730728    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:43.745424    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:43.745435    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:43.757331    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:43.757344    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:43.769406    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:43.769417    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:43.773512    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:43.773520    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:43.810208    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:43.810219    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:43.847662    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:43.847673    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:43.859587    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:43.859598    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:43.877013    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:43.877024    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:43.890980    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:43.890994    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:46.418166    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:51.420565    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:58:51.420753    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:51.436577    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:51.436665    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:51.449278    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:51.449352    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:51.459856    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:51.459931    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:51.470970    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:51.471046    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:51.482253    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:51.482325    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:51.492552    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:51.492624    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:51.502467    5382 logs.go:282] 0 containers: []
	W1204 12:58:51.502480    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:51.502572    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:51.512983    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:51.513002    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:51.513008    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:51.527300    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:51.527311    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:51.568565    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:51.568577    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:51.582358    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:51.582367    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:51.605496    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:51.605504    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:51.619751    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:51.619762    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:51.631444    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:51.631454    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:51.646555    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:51.646567    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:51.661998    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:51.662010    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:51.680506    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:51.680521    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:51.696237    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:51.696249    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:51.730235    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:51.730246    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:51.741742    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:51.741753    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:58:51.759146    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:51.759158    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:51.798190    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:51.798204    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:51.803258    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:51.803268    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:51.821013    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:51.821027    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:54.336562    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:58:59.339161    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
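	Every cycle in this stretch opens with the same probe: a GET against the apiserver's /healthz that gives up after five seconds and is logged as "stopped". A minimal sketch of that probe shape in Go follows — the endpoint and the five-second timeout are taken from the log above; the helper name and the skipped TLS verification are assumptions of this sketch, not minikube's actual api_server.go code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiServerHealthz is a hypothetical re-creation of the probe seen in the
// log: GET https://10.0.2.15:8443/healthz with a 5s client timeout, where a
// timeout is reported as "stopped" rather than as a hard failure.
func apiServerHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
		Transport: &http.Transport{
			// Assumption for this sketch: skip cert verification so the probe
			// works against a self-signed apiserver serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return fmt.Errorf("stopped: %w", err) // e.g. Client.Timeout exceeded while awaiting headers
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %q (%d)", body, resp.StatusCode)
	}
	return nil
}

func main() {
	if err := apiServerHealthz("https://10.0.2.15:8443"); err != nil {
		fmt.Println(err)
	}
}
```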
	I1204 12:58:59.339295    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:58:59.351803    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:58:59.351888    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:58:59.362422    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:58:59.362501    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:58:59.400348    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:58:59.400429    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:58:59.413638    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:58:59.413718    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:58:59.424024    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:58:59.424103    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:58:59.438421    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:58:59.438493    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:58:59.448474    5382 logs.go:282] 0 containers: []
	W1204 12:58:59.448484    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:58:59.448542    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:58:59.459792    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:58:59.459814    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:58:59.459820    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:58:59.471444    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:58:59.471458    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:58:59.483368    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:58:59.483379    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:58:59.495442    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:58:59.495453    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:58:59.511887    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:58:59.511897    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:58:59.547055    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:58:59.547067    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:58:59.560992    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:58:59.561005    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:58:59.598091    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:58:59.598102    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:58:59.609776    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:58:59.609787    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:58:59.623579    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:58:59.623590    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:58:59.638170    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:58:59.638184    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:58:59.649672    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:58:59.649684    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:58:59.677213    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:58:59.677221    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:58:59.690587    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:58:59.690599    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:58:59.730246    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:58:59.730257    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:58:59.734615    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:58:59.734622    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:58:59.749778    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:58:59.749791    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
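	After each failed probe, the same gather pass runs: containers are listed per component via the k8s_<name> prefix that cri-dockerd assigns (two IDs per control-plane component here, because the exited pre-restart container still sits next to its replacement; one each for coredns and kube-proxy), then the last 400 lines of each are tailed. A sketch of that pass under the same assumptions — docker CLI available on the guest, function names illustrative rather than minikube's:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the `docker ps -a --filter=name=k8s_<component>` calls
// in the log: cri-dockerd names containers k8s_<container>_<pod>_..., so a
// name filter on "k8s_etcd" matches both the current and the exited etcd.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the `docker logs --tail 400 <id>` calls that follow.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("list failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			_ = logs // minikube keeps these for the failure report
		}
	}
}
```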
	I1204 12:59:02.269006    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:07.271346    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:07.271530    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:07.284689    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:07.284773    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:07.295892    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:07.295971    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:07.307189    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:07.307264    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:07.318228    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:07.318314    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:07.333483    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:07.333560    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:07.344352    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:07.344421    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:07.356710    5382 logs.go:282] 0 containers: []
	W1204 12:59:07.356721    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:07.356795    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:07.368990    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:07.369007    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:07.369014    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:07.383972    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:07.383987    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:07.396015    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:07.396026    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:07.407065    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:07.407076    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:07.446180    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:07.446192    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:07.482728    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:07.482740    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:07.496396    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:07.496407    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:07.508415    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:07.508426    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:07.528993    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:07.529007    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:07.547774    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:07.547784    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:07.570322    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:07.570331    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:07.581965    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:07.581976    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:07.586595    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:07.586603    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:07.601162    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:07.601171    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:07.618180    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:07.618192    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:07.631358    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:07.631371    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:07.646970    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:07.646983    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:10.189130    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:15.189540    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:15.189692    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:15.204486    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:15.204576    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:15.216217    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:15.216300    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:15.226926    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:15.226997    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:15.238170    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:15.238247    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:15.248959    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:15.249038    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:15.259225    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:15.259296    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:15.269422    5382 logs.go:282] 0 containers: []
	W1204 12:59:15.269435    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:15.269498    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:15.285470    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:15.285488    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:15.285494    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:15.324099    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:15.324107    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:15.328169    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:15.328176    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:15.342054    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:15.342067    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:15.381936    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:15.381951    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:15.395019    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:15.395030    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:15.411662    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:15.411674    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:15.449513    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:15.449532    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:15.469531    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:15.469547    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:15.482680    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:15.482693    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:15.499787    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:15.499799    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:15.511816    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:15.511829    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:15.535693    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:15.535708    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:15.549451    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:15.549460    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:15.564124    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:15.564134    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:15.577424    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:15.577438    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:15.596418    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:15.596438    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:18.116099    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:23.118593    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:23.118822    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:23.134135    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:23.134233    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:23.145774    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:23.145854    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:23.157499    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:23.157580    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:23.169600    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:23.169684    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:23.181852    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:23.181938    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:23.192762    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:23.192837    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:23.204529    5382 logs.go:282] 0 containers: []
	W1204 12:59:23.204541    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:23.204606    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:23.215112    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:23.215130    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:23.215135    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:23.232394    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:23.232408    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:23.246398    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:23.246411    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:23.285703    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:23.285710    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:23.300788    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:23.300804    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:23.312928    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:23.312942    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:23.325047    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:23.325061    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:23.350277    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:23.350290    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:23.364137    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:23.364150    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:23.368520    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:23.368532    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:23.406098    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:23.406109    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:23.421778    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:23.421787    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:23.437063    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:23.437075    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:23.455944    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:23.455957    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:23.500929    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:23.500943    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:23.515724    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:23.515736    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:23.528135    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:23.528146    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:26.042429    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:31.044858    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:31.045290    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:31.076977    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:31.077131    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:31.095078    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:31.095184    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:31.109491    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:31.109568    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:31.121418    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:31.121519    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:31.134106    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:31.134185    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:31.148047    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:31.148121    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:31.159372    5382 logs.go:282] 0 containers: []
	W1204 12:59:31.159390    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:31.159455    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:31.175852    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:31.175871    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:31.175877    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:31.182166    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:31.182180    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:31.199159    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:31.199168    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:31.215544    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:31.215553    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:31.239873    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:31.239886    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:31.258196    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:31.258211    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:31.273408    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:31.273421    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:31.286298    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:31.286312    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:31.329494    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:31.329507    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:31.367111    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:31.367127    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:31.407001    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:31.407022    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:31.424223    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:31.424234    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:31.436872    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:31.436884    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:31.460387    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:31.460396    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:31.477774    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:31.477786    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:31.495405    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:31.495414    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:31.507562    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:31.507576    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:34.022295    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:39.024749    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:39.024903    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:39.037862    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:39.037941    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:39.049769    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:39.049852    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:39.061023    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:39.061108    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:39.073756    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:39.073845    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:39.085208    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:39.085287    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:39.097605    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:39.097697    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:39.108693    5382 logs.go:282] 0 containers: []
	W1204 12:59:39.108706    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:39.108815    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:39.121615    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:39.121632    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:39.121637    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:39.136408    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:39.136421    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:39.154156    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:39.154172    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:39.166949    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:39.166963    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:39.187752    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:39.187764    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:39.199984    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:39.199996    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:39.223464    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:39.223478    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:39.236108    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:39.236118    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:39.274160    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:39.274177    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:39.279057    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:39.279070    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:39.294967    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:39.294977    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:39.311988    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:39.312006    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:39.359762    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:39.359778    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:39.373600    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:39.373612    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:39.417335    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:39.417347    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:39.431835    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:39.431845    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:39.443795    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:39.443807    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:41.959895    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:46.962227    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:46.962301    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:46.978531    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:46.978611    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:46.990414    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:46.990498    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:47.001806    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:47.001891    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:47.013695    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:47.013774    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:47.025532    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:47.025613    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:47.037999    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:47.038079    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:47.052598    5382 logs.go:282] 0 containers: []
	W1204 12:59:47.052641    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:47.052714    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:47.066158    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:47.066192    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:47.066203    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:47.106200    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:47.106215    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:47.121309    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:47.121324    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:47.135408    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:47.135425    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:47.158724    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:47.158738    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:47.163529    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:47.163539    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:47.202164    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:47.202176    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:47.222540    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:47.222553    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:47.237280    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:47.237292    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:47.277137    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:47.277154    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:47.292308    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:47.292326    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:47.304997    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:47.305007    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:47.316804    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:47.316818    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:47.328675    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:47.328686    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:47.340686    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:47.340697    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:47.355986    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:47.355999    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:47.367431    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:47.367441    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:49.880776    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 12:59:54.881564    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 12:59:54.881618    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 12:59:54.893279    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 12:59:54.893350    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 12:59:54.905033    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 12:59:54.905115    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 12:59:54.916749    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 12:59:54.916828    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 12:59:54.927281    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 12:59:54.927354    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 12:59:54.939183    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 12:59:54.939260    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 12:59:54.950837    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 12:59:54.950921    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 12:59:54.961841    5382 logs.go:282] 0 containers: []
	W1204 12:59:54.961860    5382 logs.go:284] No container was found matching "kindnet"
	I1204 12:59:54.961948    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 12:59:54.973434    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 12:59:54.973449    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 12:59:54.973455    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 12:59:54.978006    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 12:59:54.978014    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 12:59:55.015981    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 12:59:55.015990    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 12:59:55.056634    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 12:59:55.056650    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 12:59:55.069608    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 12:59:55.069621    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 12:59:55.085520    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 12:59:55.085528    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 12:59:55.103392    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 12:59:55.103405    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 12:59:55.118037    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 12:59:55.118048    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 12:59:55.132717    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 12:59:55.132729    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 12:59:55.148332    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 12:59:55.148345    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 12:59:55.164138    5382 logs.go:123] Gathering logs for container status ...
	I1204 12:59:55.164151    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 12:59:55.176865    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 12:59:55.176876    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 12:59:55.213953    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 12:59:55.213967    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 12:59:55.225955    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 12:59:55.225968    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 12:59:55.243730    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 12:59:55.243744    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 12:59:55.255193    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 12:59:55.255204    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 12:59:55.267569    5382 logs.go:123] Gathering logs for Docker ...
	I1204 12:59:55.267580    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 12:59:57.791884    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:02.793108    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:02.793209    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:02.804653    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 13:00:02.804744    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:02.816533    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 13:00:02.816620    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:02.829116    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 13:00:02.829197    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:02.840557    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 13:00:02.840643    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:02.851432    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 13:00:02.851516    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:02.862690    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 13:00:02.862774    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:02.881409    5382 logs.go:282] 0 containers: []
	W1204 13:00:02.881422    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:02.881494    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:02.893801    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 13:00:02.893822    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 13:00:02.893829    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 13:00:02.906460    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 13:00:02.906477    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 13:00:02.922070    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 13:00:02.922083    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 13:00:02.937265    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:02.937276    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:02.941531    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:02.941539    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:02.980863    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 13:00:02.980877    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 13:00:02.993628    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:02.993640    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:03.034789    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 13:00:03.034808    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 13:00:03.073000    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 13:00:03.073012    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 13:00:03.091613    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 13:00:03.091623    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 13:00:03.103509    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 13:00:03.103521    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 13:00:03.120750    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 13:00:03.120761    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 13:00:03.132012    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 13:00:03.132024    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 13:00:03.152850    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 13:00:03.152862    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 13:00:03.168202    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:00:03.168213    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:03.179761    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 13:00:03.179773    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 13:00:03.197327    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:03.197339    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:05.722342    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:10.723729    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:10.723835    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:00:10.742646    5382 logs.go:282] 2 containers: [ed74b1bddfaf 01a8a4e18f3f]
	I1204 13:00:10.742728    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:00:10.754594    5382 logs.go:282] 2 containers: [da31b3465431 7a4a4f7d1323]
	I1204 13:00:10.754679    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:00:10.767236    5382 logs.go:282] 1 containers: [7c9a4049d5a4]
	I1204 13:00:10.767379    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:00:10.779597    5382 logs.go:282] 2 containers: [5e1fbcdee494 7b2edfde1470]
	I1204 13:00:10.779676    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:00:10.790895    5382 logs.go:282] 1 containers: [8fc818b3ae37]
	I1204 13:00:10.790975    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:00:10.802930    5382 logs.go:282] 2 containers: [c76efbb59e4f 62e56b454444]
	I1204 13:00:10.803048    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:00:10.814666    5382 logs.go:282] 0 containers: []
	W1204 13:00:10.814676    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:00:10.814743    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:00:10.826367    5382 logs.go:282] 2 containers: [1691e82b37a6 42764af0d886]
	I1204 13:00:10.826385    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:00:10.826390    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:00:10.867237    5382 logs.go:123] Gathering logs for kube-apiserver [ed74b1bddfaf] ...
	I1204 13:00:10.867257    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed74b1bddfaf"
	I1204 13:00:10.882668    5382 logs.go:123] Gathering logs for etcd [7a4a4f7d1323] ...
	I1204 13:00:10.882686    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a4a4f7d1323"
	I1204 13:00:10.897158    5382 logs.go:123] Gathering logs for kube-scheduler [5e1fbcdee494] ...
	I1204 13:00:10.897173    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e1fbcdee494"
	I1204 13:00:10.910173    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:00:10.910185    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:00:10.915101    5382 logs.go:123] Gathering logs for kube-apiserver [01a8a4e18f3f] ...
	I1204 13:00:10.915113    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01a8a4e18f3f"
	I1204 13:00:10.953887    5382 logs.go:123] Gathering logs for etcd [da31b3465431] ...
	I1204 13:00:10.953903    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da31b3465431"
	I1204 13:00:10.968009    5382 logs.go:123] Gathering logs for kube-controller-manager [62e56b454444] ...
	I1204 13:00:10.968022    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e56b454444"
	I1204 13:00:10.983125    5382 logs.go:123] Gathering logs for storage-provisioner [1691e82b37a6] ...
	I1204 13:00:10.983135    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1691e82b37a6"
	I1204 13:00:10.994918    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:00:10.994929    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:00:11.007164    5382 logs.go:123] Gathering logs for coredns [7c9a4049d5a4] ...
	I1204 13:00:11.007175    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9a4049d5a4"
	I1204 13:00:11.022405    5382 logs.go:123] Gathering logs for kube-scheduler [7b2edfde1470] ...
	I1204 13:00:11.022416    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b2edfde1470"
	I1204 13:00:11.037247    5382 logs.go:123] Gathering logs for kube-proxy [8fc818b3ae37] ...
	I1204 13:00:11.037256    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fc818b3ae37"
	I1204 13:00:11.052864    5382 logs.go:123] Gathering logs for kube-controller-manager [c76efbb59e4f] ...
	I1204 13:00:11.052876    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76efbb59e4f"
	I1204 13:00:11.071191    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:00:11.071201    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:00:11.094314    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:00:11.094322    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:00:11.130406    5382 logs.go:123] Gathering logs for storage-provisioner [42764af0d886] ...
	I1204 13:00:11.130417    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42764af0d886"
	I1204 13:00:13.647783    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:18.648203    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:18.648239    5382 kubeadm.go:597] duration metric: took 4m4.118209875s to restartPrimaryControlPlane
	W1204 13:00:18.648268    5382 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
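	This is the decision point the preceding four minutes of probing was building toward: the restart budget is exhausted, so minikube falls back to a full cluster reset rather than failing outright. A compressed sketch of that deadline-with-fallback control flow — only the ~4-minute budget and the reset fallback are taken from the log; the function shape and the shortened demo durations are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// restartOrReset is an illustrative re-creation of the control flow visible
// in the log: keep probing healthz until a time budget is spent, then fall
// back to a full reset (kubeadm reset + kubeadm init) instead of giving up.
func restartOrReset(budget, pause time.Duration, healthz, reset func() error) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if healthz() == nil {
			return nil // apiserver answered; restart succeeded, no reset needed
		}
		time.Sleep(pause) // in the real log the 5s probe timeout provides the pacing
	}
	// "Unable to restart control-plane node(s), will reset cluster"
	return reset()
}

func main() {
	probe := func() error { return errors.New("context deadline exceeded") }
	reset := func() error { fmt.Println("falling back: kubeadm reset --force"); return nil }
	// The real budget is ~4 minutes ("took 4m4.118209875s"); shortened here.
	fmt.Println(restartOrReset(200*time.Millisecond, 50*time.Millisecond, probe, reset))
}
```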
	I1204 13:00:18.648283    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 13:00:19.701447    5382 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.053139667s)
	I1204 13:00:19.701528    5382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 13:00:19.706325    5382 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 13:00:19.709243    5382 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 13:00:19.711825    5382 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 13:00:19.711832    5382 kubeadm.go:157] found existing configuration files:
	
	I1204 13:00:19.711860    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf
	I1204 13:00:19.714630    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 13:00:19.714656    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 13:00:19.717722    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf
	I1204 13:00:19.720205    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 13:00:19.720235    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 13:00:19.722855    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf
	I1204 13:00:19.726065    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 13:00:19.726095    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 13:00:19.729199    5382 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf
	I1204 13:00:19.731649    5382 kubeadm.go:163] "https://control-plane.minikube.internal:63857" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:63857 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 13:00:19.731674    5382 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
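
The grep/rm pairs above are a staleness check: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init runs. A sketch of that check in Go, with the endpoint and paths taken from the log (it would need root, since the files live under /etc/kubernetes):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:63857"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		// A missing file and a missing endpoint are both treated as stale,
		// mirroring the "sudo grep ... / sudo rm -f ..." sequence above.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(conf)
			fmt.Println("removed stale config:", conf)
		}
	}
}

In the run above every grep exits with status 2 because the files are already gone after kubeadm reset, so all four paths fall through to the rm branch.
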
	I1204 13:00:19.734579    5382 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 13:00:19.751736    5382 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 13:00:19.751791    5382 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 13:00:19.805903    5382 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 13:00:19.805962    5382 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 13:00:19.806018    5382 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 13:00:19.855470    5382 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 13:00:19.859449    5382 out.go:235]   - Generating certificates and keys ...
	I1204 13:00:19.859488    5382 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 13:00:19.859522    5382 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 13:00:19.859563    5382 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 13:00:19.859593    5382 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 13:00:19.859631    5382 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 13:00:19.859671    5382 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 13:00:19.859705    5382 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 13:00:19.859748    5382 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 13:00:19.859785    5382 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 13:00:19.859827    5382 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 13:00:19.859853    5382 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 13:00:19.859889    5382 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 13:00:19.912374    5382 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 13:00:20.104793    5382 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 13:00:20.156622    5382 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 13:00:20.244183    5382 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 13:00:20.273477    5382 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 13:00:20.273885    5382 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 13:00:20.273910    5382 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 13:00:20.361109    5382 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 13:00:20.368053    5382 out.go:235]   - Booting up control plane ...
	I1204 13:00:20.368102    5382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 13:00:20.368157    5382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 13:00:20.368205    5382 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 13:00:20.368243    5382 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 13:00:20.368333    5382 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 13:00:24.864619    5382 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501565 seconds
	I1204 13:00:24.864676    5382 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 13:00:24.869694    5382 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 13:00:25.379939    5382 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 13:00:25.380136    5382 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-827000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 13:00:25.885095    5382 kubeadm.go:310] [bootstrap-token] Using token: pfiqw0.szxm27i1cbji286z
	I1204 13:00:25.889175    5382 out.go:235]   - Configuring RBAC rules ...
	I1204 13:00:25.889237    5382 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 13:00:25.889279    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 13:00:25.893072    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 13:00:25.893988    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 13:00:25.894916    5382 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 13:00:25.895699    5382 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 13:00:25.898744    5382 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 13:00:26.069436    5382 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 13:00:26.291324    5382 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 13:00:26.291888    5382 kubeadm.go:310] 
	I1204 13:00:26.291923    5382 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 13:00:26.291927    5382 kubeadm.go:310] 
	I1204 13:00:26.291966    5382 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 13:00:26.291971    5382 kubeadm.go:310] 
	I1204 13:00:26.291983    5382 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 13:00:26.292020    5382 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 13:00:26.292058    5382 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 13:00:26.292066    5382 kubeadm.go:310] 
	I1204 13:00:26.292101    5382 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 13:00:26.292118    5382 kubeadm.go:310] 
	I1204 13:00:26.292145    5382 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 13:00:26.292147    5382 kubeadm.go:310] 
	I1204 13:00:26.292177    5382 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 13:00:26.292217    5382 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 13:00:26.292252    5382 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 13:00:26.292255    5382 kubeadm.go:310] 
	I1204 13:00:26.292302    5382 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 13:00:26.292349    5382 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 13:00:26.292354    5382 kubeadm.go:310] 
	I1204 13:00:26.292401    5382 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pfiqw0.szxm27i1cbji286z \
	I1204 13:00:26.292462    5382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 \
	I1204 13:00:26.292475    5382 kubeadm.go:310] 	--control-plane 
	I1204 13:00:26.292479    5382 kubeadm.go:310] 
	I1204 13:00:26.292542    5382 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 13:00:26.292546    5382 kubeadm.go:310] 
	I1204 13:00:26.292583    5382 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pfiqw0.szxm27i1cbji286z \
	I1204 13:00:26.292640    5382 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7d8c9ff99071ccd6c2c996325e17b7e464f4a0a980b55e37863d1d8ca70e7d83 
	I1204 13:00:26.293835    5382 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 13:00:26.293848    5382 cni.go:84] Creating CNI manager for ""
	I1204 13:00:26.293858    5382 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:00:26.297798    5382 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 13:00:26.305953    5382 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 13:00:26.309038    5382 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
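
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist configure the bridge CNI selected at cni.go:158. The payload itself is not shown in this log; the following is only an illustrative bridge conflist of the same general shape (the subnet, names, and plugin options are assumptions, not the real file), written to the same path:

package main

import "os"

// Illustrative only: a minimal bridge CNI chain of the kind minikube
// installs. The real 1-k8s.conflist differs in detail.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// 0644 matches typical permissions for CNI config files.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
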
	I1204 13:00:26.313855    5382 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 13:00:26.313902    5382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 13:00:26.313928    5382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-827000 minikube.k8s.io/updated_at=2024_12_04T13_00_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b071a038f2c56b751b45082bb8c33ba68a652c59 minikube.k8s.io/name=stopped-upgrade-827000 minikube.k8s.io/primary=true
	I1204 13:00:26.362903    5382 kubeadm.go:1113] duration metric: took 49.038375ms to wait for elevateKubeSystemPrivileges
	I1204 13:00:26.362916    5382 ops.go:34] apiserver oom_adj: -16
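
The oom_adj value of -16 recorded by ops.go is read straight from /proc/<pid>/oom_adj for the apiserver process. A standalone sketch of the same read, locating the process with pgrep as the shell pipeline above does (flags simplified from the log's pattern match):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest process named exactly kube-apiserver.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 above
}
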
	I1204 13:00:26.362923    5382 kubeadm.go:394] duration metric: took 4m11.846455375s to StartCluster
	I1204 13:00:26.362934    5382 settings.go:142] acquiring lock: {Name:mkc9bc1437987e3de306bb25e3c2f4effe0b8b57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:00:26.363036    5382 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:00:26.363509    5382 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/kubeconfig: {Name:mk18d42ed20876d07306ef2e0f2006c5dc1a1320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:00:26.363741    5382 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:00:26.363802    5382 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 13:00:26.363857    5382 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-827000"
	I1204 13:00:26.363865    5382 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-827000"
	W1204 13:00:26.363868    5382 addons.go:243] addon storage-provisioner should already be in state true
	I1204 13:00:26.363880    5382 host.go:66] Checking if "stopped-upgrade-827000" exists ...
	I1204 13:00:26.363923    5382 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-827000"
	I1204 13:00:26.363948    5382 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:00:26.363971    5382 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-827000"
	I1204 13:00:26.365131    5382 kapi.go:59] client config for stopped-upgrade-827000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.key", CAFile:"/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10452b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
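
The rest.Config dumped above is the client-go configuration minikube assembles from the profile's client certificate, key, and CA. A stripped-down sketch that builds the equivalent config and a clientset, with the host and TLS paths taken from the dump (everything else left at its zero value):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and certificate paths as shown in the kapi.go dump above.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/stopped-upgrade-827000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19985-1334/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready:", clientset != nil)
}
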
	I1204 13:00:26.365254    5382 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-827000"
	W1204 13:00:26.365258    5382 addons.go:243] addon default-storageclass should already be in state true
	I1204 13:00:26.365266    5382 host.go:66] Checking if "stopped-upgrade-827000" exists ...
	I1204 13:00:26.367798    5382 out.go:177] * Verifying Kubernetes components...
	I1204 13:00:26.368170    5382 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 13:00:26.371029    5382 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 13:00:26.371035    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 13:00:26.373772    5382 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 13:00:26.377943    5382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 13:00:26.380764    5382 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 13:00:26.380773    5382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 13:00:26.380781    5382 sshutil.go:53] new ssh client: &{IP:localhost Port:63822 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/stopped-upgrade-827000/id_rsa Username:docker}
	I1204 13:00:26.461665    5382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 13:00:26.467707    5382 api_server.go:52] waiting for apiserver process to appear ...
	I1204 13:00:26.467789    5382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 13:00:26.472638    5382 api_server.go:72] duration metric: took 108.882208ms to wait for apiserver process to appear ...
	I1204 13:00:26.472648    5382 api_server.go:88] waiting for apiserver healthz status ...
	I1204 13:00:26.472658    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:26.479272    5382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 13:00:26.531905    5382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 13:00:26.868217    5382 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 13:00:26.868229    5382 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 13:00:31.474840    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:31.474909    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:36.475453    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:36.475474    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:41.475928    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:41.475954    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:46.476534    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:46.476561    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:51.477319    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:51.477351    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:00:56.478262    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:00:56.478287    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 13:00:56.870050    5382 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 13:00:56.874047    5382 out.go:177] * Enabled addons: storage-provisioner
	I1204 13:00:56.882055    5382 addons.go:510] duration metric: took 30.517914958s for enable addons: enabled=[storage-provisioner]
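
The default-storageclass failure above is a StorageClass list call timing out against the unreachable apiserver. Using a clientset built as in the earlier sketch, the failing request is essentially the following (a sketch, not minikube's actual callback, which wraps the error differently):

package diag

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listStorageClasses issues the call that failed above:
// GET /apis/storage.k8s.io/v1/storageclasses.
func listStorageClasses(cs *kubernetes.Clientset) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		// With the apiserver down this surfaces as the "dial tcp
		// 10.0.2.15:8443: i/o timeout" seen in the warning above.
		return fmt.Errorf("listing StorageClasses: %w", err)
	}
	fmt.Println("storage classes:", len(scs.Items))
	return nil
}
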
	I1204 13:01:01.479579    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:01.479619    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:06.481118    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:06.481149    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:11.482995    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:11.483025    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:16.483371    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:16.483394    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:21.485723    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:21.485769    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:26.488090    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
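
Each "Checking apiserver healthz ... stopped" pair above is one probe of the /healthz endpoint that gives up after roughly five seconds (compare the timestamps). A minimal sketch of that probe loop; the endpoint matches the log, but the sketch skips TLS verification where minikube uses the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A 5s per-attempt timeout matches the spacing between the "Checking"
	// and "stopped" lines above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// Mirrors the log: "Client.Timeout exceeded while awaiting headers".
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
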
	I1204 13:01:26.488237    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:01:26.499272    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:01:26.499349    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:01:26.509879    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:01:26.509959    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:01:26.520492    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:01:26.520570    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:01:26.530861    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:01:26.530933    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:01:26.541180    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:01:26.541259    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:01:26.551709    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:01:26.551784    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:01:26.561936    5382 logs.go:282] 0 containers: []
	W1204 13:01:26.561949    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:01:26.562015    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:01:26.572588    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:01:26.572601    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:01:26.572607    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:01:26.587413    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:01:26.587425    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:01:26.598758    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:01:26.598768    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:01:26.610542    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:01:26.610555    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:01:26.630845    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:01:26.630858    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:01:26.642457    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:01:26.642469    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:01:26.667155    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:01:26.667168    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:01:26.705487    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:01:26.705495    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:01:26.720083    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:01:26.720093    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:01:26.744974    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:01:26.744985    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:01:26.756920    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:01:26.756933    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:01:26.769932    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:01:26.769944    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:01:26.774173    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:01:26.774181    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:01:29.309822    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:34.312336    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:34.312930    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:01:34.359257    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:01:34.359410    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:01:34.380007    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:01:34.380122    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:01:34.395894    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:01:34.395975    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:01:34.408460    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:01:34.408541    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:01:34.423454    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:01:34.423525    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:01:34.434779    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:01:34.434859    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:01:34.445280    5382 logs.go:282] 0 containers: []
	W1204 13:01:34.445295    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:01:34.445363    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:01:34.455854    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:01:34.455869    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:01:34.455875    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:01:34.460499    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:01:34.460506    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:01:34.499753    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:01:34.499768    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:01:34.514421    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:01:34.514434    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:01:34.533900    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:01:34.533911    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:01:34.564668    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:01:34.564681    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:01:34.579838    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:01:34.579852    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:01:34.616984    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:01:34.616993    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:01:34.628516    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:01:34.628529    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:01:34.643414    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:01:34.643427    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:01:34.655334    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:01:34.655347    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:01:34.672511    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:01:34.672522    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:01:34.684356    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:01:34.684369    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:01:37.209737    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:42.212504    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:42.212728    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:01:42.236008    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:01:42.236138    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:01:42.251952    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:01:42.252035    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:01:42.264387    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:01:42.264468    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:01:42.275756    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:01:42.275834    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:01:42.286182    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:01:42.286260    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:01:42.296678    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:01:42.296752    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:01:42.306430    5382 logs.go:282] 0 containers: []
	W1204 13:01:42.306443    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:01:42.306507    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:01:42.316805    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:01:42.316821    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:01:42.316826    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:01:42.351499    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:01:42.351513    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:01:42.365775    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:01:42.365785    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:01:42.377424    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:01:42.377437    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:01:42.402316    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:01:42.402325    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:01:42.414203    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:01:42.414216    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:01:42.431426    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:01:42.431437    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:01:42.470005    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:01:42.470015    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:01:42.474402    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:01:42.474412    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:01:42.488159    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:01:42.488169    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:01:42.499346    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:01:42.499360    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:01:42.511332    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:01:42.511346    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:01:42.528713    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:01:42.528723    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:01:45.042627    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:50.045531    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:50.045963    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:01:50.083148    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:01:50.083286    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:01:50.102112    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:01:50.102250    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:01:50.117018    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:01:50.117102    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:01:50.129009    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:01:50.129089    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:01:50.139786    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:01:50.139850    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:01:50.160429    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:01:50.160511    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:01:50.170808    5382 logs.go:282] 0 containers: []
	W1204 13:01:50.170825    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:01:50.170892    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:01:50.180940    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:01:50.180959    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:01:50.180965    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:01:50.186273    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:01:50.186280    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:01:50.206192    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:01:50.206204    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:01:50.217617    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:01:50.217629    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:01:50.233092    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:01:50.233104    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:01:50.258519    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:01:50.258533    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:01:50.269809    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:01:50.269823    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:01:50.308556    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:01:50.308570    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:01:50.327673    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:01:50.327684    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:01:50.339650    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:01:50.339663    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:01:50.351286    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:01:50.351300    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:01:50.371857    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:01:50.371866    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:01:50.383465    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:01:50.383478    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:01:52.937587    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:01:57.940130    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:01:57.940621    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:01:57.973036    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:01:57.973189    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:01:57.992898    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:01:57.993008    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:01:58.007114    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:01:58.007197    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:01:58.019159    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:01:58.019238    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:01:58.032656    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:01:58.032733    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:01:58.043108    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:01:58.043196    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:01:58.053786    5382 logs.go:282] 0 containers: []
	W1204 13:01:58.053797    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:01:58.053868    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:01:58.064246    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:01:58.064263    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:01:58.064268    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:01:58.078999    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:01:58.079011    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:01:58.091518    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:01:58.091529    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:01:58.106270    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:01:58.106281    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:01:58.130146    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:01:58.130156    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:01:58.141338    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:01:58.141350    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:01:58.155059    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:01:58.155075    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:01:58.191535    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:01:58.191544    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:01:58.195636    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:01:58.195642    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:01:58.229750    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:01:58.229762    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:01:58.247150    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:01:58.247165    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:01:58.258479    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:01:58.258489    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:01:58.270164    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:01:58.270174    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:00.793334    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:02:05.796096    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:02:05.796551    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:02:05.841248    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:02:05.841404    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:02:05.862538    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:02:05.862673    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:02:05.878320    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:02:05.878410    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:02:05.890373    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:02:05.890448    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:02:05.900922    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:02:05.901005    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:02:05.914944    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:02:05.915020    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:02:05.925187    5382 logs.go:282] 0 containers: []
	W1204 13:02:05.925200    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:02:05.925258    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:02:05.935591    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:02:05.935605    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:02:05.935610    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:02:05.956301    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:02:05.956315    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:02:05.974768    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:02:05.974781    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:02:05.988750    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:02:05.988760    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:02:06.000114    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:02:06.000123    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:02:06.014700    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:02:06.014713    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:02:06.028240    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:02:06.028253    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:06.045912    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:02:06.045923    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:02:06.084479    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:02:06.084489    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:02:06.088684    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:02:06.088693    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:02:06.122625    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:02:06.122639    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:02:06.134498    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:02:06.134511    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:02:06.146181    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:02:06.146190    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:02:08.672484    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:02:13.675088    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:02:13.675647    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:02:13.721113    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:02:13.721282    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:02:13.739851    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:02:13.739967    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:02:13.754330    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:02:13.754419    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:02:13.766836    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:02:13.766906    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:02:13.777714    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:02:13.777799    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:02:13.797279    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:02:13.797351    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:02:13.807598    5382 logs.go:282] 0 containers: []
	W1204 13:02:13.807609    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:02:13.807673    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:02:13.820510    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:02:13.820526    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:02:13.820531    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:02:13.832850    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:02:13.832863    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:02:13.856712    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:02:13.856722    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:02:13.890741    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:02:13.890756    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:02:13.905481    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:02:13.905490    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:02:13.919902    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:02:13.919914    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:02:13.931853    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:02:13.931868    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:02:13.947720    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:02:13.947734    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:02:13.959838    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:02:13.959852    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:13.978323    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:02:13.978335    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:02:13.990283    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:02:13.990294    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:02:14.033038    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:02:14.033047    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:02:14.037349    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:02:14.037356    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:02:16.551199    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:02:21.553499    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:02:21.553939    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:02:21.588520    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:02:21.588669    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:02:21.609314    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:02:21.609457    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:02:21.624501    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:02:21.624582    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:02:21.636565    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:02:21.636637    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:02:21.647567    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:02:21.647643    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:02:21.658321    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:02:21.658388    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:02:21.669079    5382 logs.go:282] 0 containers: []
	W1204 13:02:21.669090    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:02:21.669147    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:02:21.680285    5382 logs.go:282] 1 containers: [165d62e8ae53]
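	Each failed probe is followed by a discovery pass: one docker ps -a query per control-plane component, filtered on the k8s_<name> container-name prefix used by kubelet's Docker integration, and formatted down to bare IDs. A sketch of that step (the containerIDs helper is an assumed name, not the actual logs.go code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs returns the IDs of all containers (running or exited)
	    // whose name matches k8s_<component>, mirroring the ssh_runner lines above.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        ids, err := containerIDs("kube-apiserver")
	        if err != nil {
	            fmt.Println(err)
	            return
	        }
	        // mirrors "logs.go:282] 1 containers: [472d67a9a929]"
	        fmt.Printf("%d containers: %v\n", len(ids), ids)
	    }

	An empty result, as with "kindnet" here, produces the W-level "No container was found matching" line rather than an error.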
	I1204 13:02:21.680301    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:02:21.680307    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:02:21.691611    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:02:21.691624    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:02:21.703168    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:02:21.703181    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:02:21.726132    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:02:21.726139    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:02:21.737381    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:02:21.737394    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:02:21.775771    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:02:21.775783    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:02:21.779771    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:02:21.779777    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:02:21.793786    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:02:21.793799    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:02:21.808116    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:02:21.808128    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:02:21.819994    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:02:21.820007    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:21.837295    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:02:21.837306    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:02:21.873163    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:02:21.873177    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:02:21.888343    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:02:21.888356    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
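	The gathering pass itself is a fixed set of shell commands run through /bin/bash -c, as the ssh_runner lines show. Note the "container status" entry: `which crictl || echo crictl` substitutes the crictl path if present, and the trailing `|| sudo docker ps -a` falls back to Docker when crictl is missing or fails. A hedged sketch of the loop, under the assumption that the sources live in a map (which would also explain why the gathering order differs from cycle to cycle above — Go map iteration order is unspecified):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // sources maps a display name to the shell command that produces its logs;
	    // the set and names here are copied from the log, the structure is assumed.
	    var sources = map[string]string{
	        "kubelet":          "sudo journalctl -u kubelet -n 400",
	        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	    }

	    func main() {
	        for name, cmd := range sources {
	            fmt.Printf("Gathering logs for %s ...\n", name)
	            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	            if err != nil {
	                fmt.Printf("%s failed: %v\n", name, err)
	                continue
	            }
	            _ = out // a real gatherer would fold this into the report
	        }
	    }

	Per-container entries ("kube-apiserver [472d67a9a929]", each coredns instance, and so on) are added to the same pass as `docker logs --tail 400 <id>` commands, one per ID found during discovery.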
	I1204 13:02:24.401903    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:02:29.404497    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:02:29.405048    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:02:29.450395    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:02:29.450553    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:02:29.474742    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:02:29.474876    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:02:29.488863    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:02:29.488942    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:02:29.500714    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:02:29.500792    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:02:29.511347    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:02:29.511424    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:02:29.525121    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:02:29.525196    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:02:29.535250    5382 logs.go:282] 0 containers: []
	W1204 13:02:29.535263    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:02:29.535320    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:02:29.545808    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:02:29.545823    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:02:29.545828    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:02:29.557451    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:02:29.557463    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:02:29.593945    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:02:29.593956    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:02:29.608545    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:02:29.608557    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:02:29.620437    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:02:29.620447    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:02:29.631722    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:02:29.631735    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:29.649270    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:02:29.649282    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:02:29.665116    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:02:29.665128    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:02:29.669691    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:02:29.669701    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:02:29.704559    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:02:29.704573    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:02:29.718531    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:02:29.718542    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:02:29.730241    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:02:29.730253    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:02:29.745099    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:02:29.745112    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:02:32.270849    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:02:37.273686    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:02:37.274187    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:02:37.311009    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:02:37.311162    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:02:37.331848    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:02:37.331967    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:02:37.350268    5382 logs.go:282] 2 containers: [f44a8e16418a f26bc19e5662]
	I1204 13:02:37.350350    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:02:37.362077    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:02:37.362143    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:02:37.372510    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:02:37.372578    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:02:37.382528    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:02:37.382599    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:02:37.392945    5382 logs.go:282] 0 containers: []
	W1204 13:02:37.392958    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:02:37.393023    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:02:37.403761    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:02:37.403775    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:02:37.403781    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:02:37.415528    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:02:37.415540    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:02:37.430649    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:02:37.430662    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:37.448151    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:02:37.448161    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:02:37.473188    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:02:37.473199    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:02:37.509262    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:02:37.509272    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:02:37.513441    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:02:37.513449    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:02:37.575028    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:02:37.575041    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:02:37.609343    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:02:37.609357    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:02:37.626201    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:02:37.626214    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:02:37.670114    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:02:37.670129    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:02:37.682953    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:02:37.682967    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:02:37.694512    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:02:37.694525    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:02:40.207877    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:02:45.210686    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:02:45.211211    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:02:45.251842    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:02:45.251992    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:02:45.273822    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:02:45.273946    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:02:45.289277    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:02:45.289352    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:02:45.312072    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:02:45.312158    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:02:45.322681    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:02:45.322750    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:02:45.336957    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:02:45.337033    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:02:45.347527    5382 logs.go:282] 0 containers: []
	W1204 13:02:45.347538    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:02:45.347607    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:02:45.358697    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:02:45.358714    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:02:45.358720    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:02:45.370216    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:02:45.370228    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:02:45.381570    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:02:45.381580    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:02:45.396194    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:02:45.396206    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:45.413617    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:02:45.413626    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:02:45.439972    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:02:45.439979    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:02:45.451756    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:02:45.451770    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:02:45.463892    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:02:45.463905    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:02:45.476078    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:02:45.476090    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:02:45.480472    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:02:45.480481    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:02:45.495369    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:02:45.495382    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:02:45.506970    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:02:45.506986    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:02:45.543689    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:02:45.543698    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:02:45.577951    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:02:45.577961    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:02:45.592083    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:02:45.592094    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:02:48.105703    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:02:53.106803    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:02:53.106882    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:02:53.118203    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:02:53.118274    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:02:53.130241    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:02:53.130304    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:02:53.141382    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:02:53.141450    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:02:53.152522    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:02:53.152589    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:02:53.163534    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:02:53.163608    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:02:53.176138    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:02:53.176211    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:02:53.188186    5382 logs.go:282] 0 containers: []
	W1204 13:02:53.188195    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:02:53.188254    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:02:53.199393    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:02:53.199412    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:02:53.199417    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:02:53.226249    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:02:53.226257    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:02:53.238436    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:02:53.238448    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:02:53.251147    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:02:53.251158    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:02:53.264117    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:02:53.264129    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:02:53.276592    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:02:53.276602    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:02:53.294877    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:02:53.294888    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:02:53.331591    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:02:53.331601    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:02:53.344230    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:02:53.344240    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:02:53.361168    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:02:53.361180    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:02:53.388619    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:02:53.388631    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:02:53.428657    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:02:53.428672    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:02:53.444347    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:02:53.444358    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:02:53.464491    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:02:53.464501    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:02:53.468723    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:02:53.468729    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:02:55.990485    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:00.993501    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:00.994022    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:01.025674    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:01.025816    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:01.045657    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:01.045782    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:01.062236    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:01.062324    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:01.074134    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:01.074210    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:01.084643    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:01.084718    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:01.095645    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:01.095730    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:01.106004    5382 logs.go:282] 0 containers: []
	W1204 13:03:01.106015    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:01.106079    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:01.115979    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:01.115997    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:01.116002    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:01.127968    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:01.127982    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:01.152751    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:01.152757    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:01.189413    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:01.189423    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:01.201505    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:01.201519    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:01.213056    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:01.213068    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:01.231305    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:01.231316    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:01.265762    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:01.265774    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:01.288036    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:01.288048    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:01.299560    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:01.299573    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:01.311421    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:01.311430    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:01.323046    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:01.323057    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:01.327508    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:01.327517    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:01.341807    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:01.341820    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:01.356423    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:01.356435    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:03.869393    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:08.871662    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:08.871902    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:08.895986    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:08.896112    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:08.912178    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:08.912270    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:08.924918    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:08.924994    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:08.938485    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:08.938560    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:08.952481    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:08.952551    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:08.963007    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:08.963095    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:08.973501    5382 logs.go:282] 0 containers: []
	W1204 13:03:08.973513    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:08.973579    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:08.983512    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:08.983529    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:08.983535    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:08.997418    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:08.997431    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:09.011427    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:09.011440    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:09.023227    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:09.023238    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:09.048399    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:09.048408    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:09.060260    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:09.060272    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:09.074194    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:09.074207    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:09.088246    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:09.088259    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:09.108641    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:09.108654    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:09.146513    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:09.146522    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:09.181590    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:09.181603    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:09.201090    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:09.201104    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:09.212362    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:09.212375    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:09.226903    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:09.226914    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:09.244617    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:09.244627    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:11.751322    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:16.752218    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:16.752307    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:16.764836    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:16.764907    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:16.776280    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:16.776342    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:16.790305    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:16.790397    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:16.801724    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:16.801818    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:16.814671    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:16.814749    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:16.826418    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:16.826503    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:16.837802    5382 logs.go:282] 0 containers: []
	W1204 13:03:16.837814    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:16.837883    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:16.849149    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:16.849165    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:16.849172    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:16.862642    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:16.862654    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:16.875862    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:16.875874    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:16.893680    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:16.893690    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:16.906348    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:16.906361    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:16.932835    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:16.932851    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:16.945871    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:16.945884    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:16.986497    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:16.986513    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:17.008390    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:17.008405    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:17.024456    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:17.024469    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:17.037647    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:17.037659    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:17.077123    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:17.077135    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:17.081589    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:17.081600    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:17.097359    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:17.097373    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:17.111420    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:17.111430    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:19.631552    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:24.634449    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:24.634660    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:24.650652    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:24.650740    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:24.666287    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:24.666370    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:24.677645    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:24.677729    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:24.688833    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:24.688911    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:24.699516    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:24.699594    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:24.710491    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:24.710564    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:24.721188    5382 logs.go:282] 0 containers: []
	W1204 13:03:24.721203    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:24.721263    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:24.736465    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:24.736482    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:24.736487    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:24.748750    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:24.748763    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:24.763848    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:24.763858    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:24.775400    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:24.775413    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:24.790358    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:24.790370    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:24.802423    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:24.802436    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:24.823453    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:24.823465    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:24.827922    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:24.827928    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:24.864025    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:24.864038    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:24.888744    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:24.888752    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:24.900547    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:24.900559    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:24.938918    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:24.938927    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:24.951157    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:24.951171    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:24.963286    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:24.963301    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:24.975804    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:24.975815    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:27.492211    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:32.494953    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:32.495540    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:32.534903    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:32.535059    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:32.556861    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:32.556972    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:32.573201    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:32.573293    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:32.590582    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:32.590657    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:32.602380    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:32.602463    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:32.614395    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:32.614480    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:32.625480    5382 logs.go:282] 0 containers: []
	W1204 13:03:32.625490    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:32.625560    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:32.636181    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:32.636200    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:32.636205    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:32.671649    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:32.671660    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:32.686546    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:32.686559    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:32.699919    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:32.699934    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:32.712196    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:32.712210    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:32.724094    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:32.724107    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:32.736251    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:32.736264    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:32.753017    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:32.753031    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:32.790602    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:32.790610    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:32.815793    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:32.815815    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:32.834285    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:32.834297    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:32.839028    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:32.839034    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:32.855615    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:32.855624    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:32.871994    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:32.872005    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:32.884135    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:32.884147    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:35.398844    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:40.401893    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:40.402491    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:40.444079    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:40.444247    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:40.466933    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:40.467057    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:40.489779    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:40.489868    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:40.502081    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:40.502165    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:40.513605    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:40.513679    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:40.526918    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:40.526988    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:40.538203    5382 logs.go:282] 0 containers: []
	W1204 13:03:40.538218    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:40.538275    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:40.549736    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:40.549754    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:40.549758    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:40.588831    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:40.588839    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:40.592921    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:40.592929    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:40.607407    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:40.607417    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:40.620055    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:40.620066    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:40.632423    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:40.632436    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:40.644452    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:40.644465    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:40.660369    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:40.660381    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:40.678499    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:40.678511    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:40.694376    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:40.694389    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:40.719402    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:40.719410    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:40.731569    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:40.731582    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:40.768316    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:40.768330    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:40.783753    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:40.783765    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:40.798819    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:40.798830    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:43.319304    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:48.321677    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:48.322164    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:48.361412    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:48.361555    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:48.383522    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:48.383631    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:48.399271    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:48.399359    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:48.412352    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:48.412430    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:48.423898    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:48.423976    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:48.435596    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:48.435667    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:48.446501    5382 logs.go:282] 0 containers: []
	W1204 13:03:48.446518    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:48.446570    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:48.457380    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:48.457397    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:48.457403    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:48.498734    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:48.498748    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:48.517326    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:48.517339    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:48.530410    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:48.530422    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:48.542780    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:48.542793    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:48.581388    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:48.581396    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:48.594611    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:48.594622    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:48.606422    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:48.606433    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:48.623689    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:48.623702    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:48.628062    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:48.628071    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:48.646230    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:48.646244    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:48.657504    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:48.657516    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:48.669188    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:48.669201    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:48.681281    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:48.681294    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:48.695510    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:48.695523    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:51.221060    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:03:56.223435    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:03:56.223537    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:03:56.234187    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:03:56.234263    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:03:56.244348    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:03:56.244426    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:03:56.254726    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:03:56.254811    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:03:56.265200    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:03:56.265283    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:03:56.275441    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:03:56.275518    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:03:56.285831    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:03:56.285901    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:03:56.295785    5382 logs.go:282] 0 containers: []
	W1204 13:03:56.295797    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:03:56.295860    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:03:56.305838    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:03:56.305855    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:03:56.305862    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:03:56.310597    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:03:56.310606    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:03:56.324632    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:03:56.324644    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:03:56.336388    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:03:56.336400    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:03:56.350419    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:03:56.350430    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:03:56.361774    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:03:56.361786    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:03:56.386698    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:03:56.386709    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:03:56.398342    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:03:56.398352    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:03:56.436864    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:03:56.436875    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:03:56.452641    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:03:56.452654    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:03:56.464120    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:03:56.464130    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:03:56.481336    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:03:56.481346    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:03:56.520215    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:03:56.520228    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:03:56.535600    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:03:56.535613    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:03:56.547361    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:03:56.547375    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:03:59.061338    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:04:04.063279    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:04:04.063432    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:04:04.079917    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:04:04.080002    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:04:04.092436    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:04:04.092515    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:04:04.103059    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:04:04.103135    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:04:04.113825    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:04:04.113892    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:04:04.123836    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:04:04.123901    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:04:04.134463    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:04:04.134535    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:04:04.144903    5382 logs.go:282] 0 containers: []
	W1204 13:04:04.144913    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:04:04.144968    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:04:04.157544    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:04:04.157563    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:04:04.157568    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:04:04.182586    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:04:04.182596    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:04:04.220447    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:04:04.220458    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:04:04.234386    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:04:04.234398    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:04:04.245738    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:04:04.245748    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:04:04.257076    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:04:04.257089    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:04:04.268564    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:04:04.268573    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:04:04.280219    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:04:04.280232    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:04:04.292015    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:04:04.292026    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:04:04.296185    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:04:04.296195    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:04:04.310849    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:04:04.310862    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:04:04.328141    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:04:04.328155    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:04:04.339979    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:04:04.339991    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:04:04.375632    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:04:04.375643    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:04:04.389482    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:04:04.389491    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:04:06.901780    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:04:11.904126    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:04:11.904561    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:04:11.942550    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:04:11.942693    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:04:11.963943    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:04:11.964072    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:04:11.979733    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:04:11.979822    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:04:11.993495    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:04:11.993577    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:04:12.005265    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:04:12.005340    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:04:12.017352    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:04:12.017419    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:04:12.027723    5382 logs.go:282] 0 containers: []
	W1204 13:04:12.027736    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:04:12.027803    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:04:12.038709    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:04:12.038734    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:04:12.038740    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:04:12.073913    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:04:12.073924    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:04:12.092381    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:04:12.092395    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:04:12.106280    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:04:12.106293    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:04:12.130960    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:04:12.130967    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:04:12.142465    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:04:12.142477    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:04:12.156132    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:04:12.156143    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:04:12.167665    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:04:12.167678    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:04:12.185340    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:04:12.185352    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:04:12.200766    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:04:12.200779    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:04:12.212539    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:04:12.212550    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:04:12.224361    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:04:12.224374    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:04:12.235484    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:04:12.235497    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:04:12.272065    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:04:12.272075    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:04:12.276161    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:04:12.276170    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:04:14.790110    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:04:19.791037    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:04:19.791107    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 13:04:19.802506    5382 logs.go:282] 1 containers: [472d67a9a929]
	I1204 13:04:19.802584    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 13:04:19.816189    5382 logs.go:282] 1 containers: [a92469e6aebb]
	I1204 13:04:19.816254    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 13:04:19.827248    5382 logs.go:282] 4 containers: [064b2f70468a 46d438907f00 f44a8e16418a f26bc19e5662]
	I1204 13:04:19.827310    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 13:04:19.840105    5382 logs.go:282] 1 containers: [425519c35585]
	I1204 13:04:19.840168    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 13:04:19.851283    5382 logs.go:282] 1 containers: [9f0fcc2390ec]
	I1204 13:04:19.851361    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 13:04:19.863530    5382 logs.go:282] 1 containers: [b8f74ccf6985]
	I1204 13:04:19.863611    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 13:04:19.876180    5382 logs.go:282] 0 containers: []
	W1204 13:04:19.876190    5382 logs.go:284] No container was found matching "kindnet"
	I1204 13:04:19.876241    5382 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 13:04:19.886596    5382 logs.go:282] 1 containers: [165d62e8ae53]
	I1204 13:04:19.886613    5382 logs.go:123] Gathering logs for coredns [064b2f70468a] ...
	I1204 13:04:19.886619    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064b2f70468a"
	I1204 13:04:19.899302    5382 logs.go:123] Gathering logs for coredns [46d438907f00] ...
	I1204 13:04:19.899312    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 46d438907f00"
	I1204 13:04:19.913561    5382 logs.go:123] Gathering logs for coredns [f44a8e16418a] ...
	I1204 13:04:19.913574    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f44a8e16418a"
	I1204 13:04:19.926252    5382 logs.go:123] Gathering logs for kube-scheduler [425519c35585] ...
	I1204 13:04:19.926263    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 425519c35585"
	I1204 13:04:19.941659    5382 logs.go:123] Gathering logs for Docker ...
	I1204 13:04:19.941672    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 13:04:19.965983    5382 logs.go:123] Gathering logs for etcd [a92469e6aebb] ...
	I1204 13:04:19.966001    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92469e6aebb"
	I1204 13:04:19.980839    5382 logs.go:123] Gathering logs for dmesg ...
	I1204 13:04:19.980848    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 13:04:19.985101    5382 logs.go:123] Gathering logs for describe nodes ...
	I1204 13:04:19.985113    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 13:04:20.024588    5382 logs.go:123] Gathering logs for kube-apiserver [472d67a9a929] ...
	I1204 13:04:20.024597    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 472d67a9a929"
	I1204 13:04:20.038629    5382 logs.go:123] Gathering logs for coredns [f26bc19e5662] ...
	I1204 13:04:20.038642    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26bc19e5662"
	I1204 13:04:20.053057    5382 logs.go:123] Gathering logs for kube-controller-manager [b8f74ccf6985] ...
	I1204 13:04:20.053068    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f74ccf6985"
	I1204 13:04:20.072391    5382 logs.go:123] Gathering logs for container status ...
	I1204 13:04:20.072401    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 13:04:20.084004    5382 logs.go:123] Gathering logs for kubelet ...
	I1204 13:04:20.084013    5382 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 13:04:20.123778    5382 logs.go:123] Gathering logs for kube-proxy [9f0fcc2390ec] ...
	I1204 13:04:20.123796    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f0fcc2390ec"
	I1204 13:04:20.135933    5382 logs.go:123] Gathering logs for storage-provisioner [165d62e8ae53] ...
	I1204 13:04:20.135950    5382 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 165d62e8ae53"
	I1204 13:04:22.649074    5382 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 13:04:27.651946    5382 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 13:04:27.659364    5382 out.go:201] 
	W1204 13:04:27.664281    5382 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1204 13:04:27.664309    5382 out.go:270] * 
	W1204 13:04:27.666735    5382 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:04:27.676303    5382 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-827000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.96s)
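The loop above is minikube's readiness probe: every few seconds it polls the guest apiserver's /healthz endpoint, and after each timeout it re-enumerates the control-plane containers and re-gathers their logs, until the 6m0s node wait expires. A minimal sketch of the same probe run by hand, assuming the stopped-upgrade-827000 VM is still up (the guest IP 10.0.2.15 and port 8443 are taken from the log above):

	out/minikube-darwin-arm64 ssh -p stopped-upgrade-827000 -- \
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; here the request should hang and time out, matching the "context deadline exceeded" lines above.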

                                                
                                    
TestPause/serial/Start (10.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-318000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-318000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.022842625s)

                                                
                                                
-- stdout --
	* [pause-318000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-318000" primary control-plane node in "pause-318000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-318000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-318000 -n pause-318000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-318000 -n pause-318000: exit status 7 (53.873333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-318000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.08s)
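This failure, and every remaining failure in this report, is the same provisioning error: the qemu2 driver exits with GUEST_PROVISION because nothing is listening on /var/run/socket_vmnet. A hedged host-side check, assuming socket_vmnet was installed through Homebrew and runs as a root launchd service (the service name below is the Homebrew default and may differ on this CI host):

	ls -l /var/run/socket_vmnet                 # the socket the QEMU client dials
	sudo launchctl list | grep -i socket_vmnet  # is the daemon registered and running?
	sudo brew services restart socket_vmnet     # restart it if it is down

If the daemon is healthy, the socket exists and minikube can attach the VM to the socket_vmnet network; if it is not, every start attempt fails exactly as above.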

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-051000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-051000 --driver=qemu2 : exit status 80 (9.978094291s)

                                                
                                                
-- stdout --
	* [NoKubernetes-051000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-051000" primary control-plane node in "NoKubernetes-051000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-051000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-051000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-051000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000: exit status 7 (35.00775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.01s)
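All four TestNoKubernetes subtests fail identically, which also shows minikube's retry behavior: StartHost fails, the profile is deleted (or the existing VM restarted), and one more attempt is made before exiting with status 80. The refused connection can be reproduced without minikube using the client binary path recorded later in this report; "true" is just a placeholder for the command socket_vmnet_client would normally exec with the connected socket on fd 3:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

With no daemon listening, this should print the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen in each test.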

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --driver=qemu2 : exit status 80 (5.268954875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-051000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-051000
	* Restarting existing qemu2 VM for "NoKubernetes-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000: exit status 7 (73.445292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.34s)

                                                
                                    
TestNoKubernetes/serial/Start (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --driver=qemu2 : exit status 80 (5.256236042s)

                                                
                                                
-- stdout --
	* [NoKubernetes-051000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-051000
	* Restarting existing qemu2 VM for "NoKubernetes-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000: exit status 7 (63.024875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-051000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-051000 --driver=qemu2 : exit status 80 (5.264563583s)

                                                
                                                
-- stdout --
	* [NoKubernetes-051000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-051000
	* Restarting existing qemu2 VM for "NoKubernetes-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-051000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-051000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-051000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-051000 -n NoKubernetes-051000: exit status 7 (54.179375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-051000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.889575834s)

                                                
                                                
-- stdout --
	* [auto-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-395000" primary control-plane node in "auto-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 13:02:41.728642    5964 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:02:41.728799    5964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:02:41.728803    5964 out.go:358] Setting ErrFile to fd 2...
	I1204 13:02:41.728805    5964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:02:41.728938    5964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:02:41.730076    5964 out.go:352] Setting JSON to false
	I1204 13:02:41.747979    5964 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5532,"bootTime":1733340629,"procs":583,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:02:41.748055    5964 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:02:41.754686    5964 out.go:177] * [auto-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:02:41.762645    5964 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:02:41.762723    5964 notify.go:220] Checking for updates...
	I1204 13:02:41.769564    5964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:02:41.772616    5964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:02:41.776494    5964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:02:41.779620    5964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:02:41.782621    5964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:02:41.786056    5964 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:02:41.786136    5964 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:02:41.786188    5964 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:02:41.790551    5964 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:02:41.797587    5964 start.go:297] selected driver: qemu2
	I1204 13:02:41.797594    5964 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:02:41.797599    5964 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:02:41.799973    5964 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:02:41.802690    5964 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:02:41.806708    5964 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:02:41.806723    5964 cni.go:84] Creating CNI manager for ""
	I1204 13:02:41.806746    5964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:02:41.806751    5964 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 13:02:41.806780    5964 start.go:340] cluster config:
	{Name:auto-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:02:41.811288    5964 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:02:41.819662    5964 out.go:177] * Starting "auto-395000" primary control-plane node in "auto-395000" cluster
	I1204 13:02:41.823647    5964 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:02:41.823673    5964 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:02:41.823680    5964 cache.go:56] Caching tarball of preloaded images
	I1204 13:02:41.823774    5964 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:02:41.823780    5964 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:02:41.823837    5964 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/auto-395000/config.json ...
	I1204 13:02:41.823847    5964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/auto-395000/config.json: {Name:mk7e397d2288c17edd0614088b774ac53305476c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:02:41.824295    5964 start.go:360] acquireMachinesLock for auto-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:02:41.824338    5964 start.go:364] duration metric: took 38.208µs to acquireMachinesLock for "auto-395000"
	I1204 13:02:41.824350    5964 start.go:93] Provisioning new machine with config: &{Name:auto-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:02:41.824380    5964 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:02:41.828672    5964 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:02:41.843666    5964 start.go:159] libmachine.API.Create for "auto-395000" (driver="qemu2")
	I1204 13:02:41.843697    5964 client.go:168] LocalClient.Create starting
	I1204 13:02:41.843767    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:02:41.843805    5964 main.go:141] libmachine: Decoding PEM data...
	I1204 13:02:41.843815    5964 main.go:141] libmachine: Parsing certificate...
	I1204 13:02:41.843850    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:02:41.843879    5964 main.go:141] libmachine: Decoding PEM data...
	I1204 13:02:41.843886    5964 main.go:141] libmachine: Parsing certificate...
	I1204 13:02:41.844274    5964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:02:42.003392    5964 main.go:141] libmachine: Creating SSH key...
	I1204 13:02:42.204814    5964 main.go:141] libmachine: Creating Disk image...
	I1204 13:02:42.204824    5964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:02:42.205957    5964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2
	I1204 13:02:42.216411    5964 main.go:141] libmachine: STDOUT: 
	I1204 13:02:42.216435    5964 main.go:141] libmachine: STDERR: 
	I1204 13:02:42.216497    5964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2 +20000M
	I1204 13:02:42.225107    5964 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:02:42.225125    5964 main.go:141] libmachine: STDERR: 
	I1204 13:02:42.225146    5964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2
	I1204 13:02:42.225150    5964 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:02:42.225161    5964 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:02:42.225199    5964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:a4:10:e0:8a:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2
	I1204 13:02:42.226997    5964 main.go:141] libmachine: STDOUT: 
	I1204 13:02:42.227011    5964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:02:42.227032    5964 client.go:171] duration metric: took 383.325334ms to LocalClient.Create
	I1204 13:02:44.229169    5964 start.go:128] duration metric: took 2.404747125s to createHost
	I1204 13:02:44.229220    5964 start.go:83] releasing machines lock for "auto-395000", held for 2.404845458s
	W1204 13:02:44.229250    5964 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:02:44.237924    5964 out.go:177] * Deleting "auto-395000" in qemu2 ...
	W1204 13:02:44.257096    5964 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:02:44.257108    5964 start.go:729] Will try again in 5 seconds ...
	I1204 13:02:49.259467    5964 start.go:360] acquireMachinesLock for auto-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:02:49.260157    5964 start.go:364] duration metric: took 516.5µs to acquireMachinesLock for "auto-395000"
	I1204 13:02:49.260230    5964 start.go:93] Provisioning new machine with config: &{Name:auto-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:auto-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:02:49.260529    5964 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:02:49.272121    5964 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:02:49.314948    5964 start.go:159] libmachine.API.Create for "auto-395000" (driver="qemu2")
	I1204 13:02:49.315014    5964 client.go:168] LocalClient.Create starting
	I1204 13:02:49.315143    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:02:49.315226    5964 main.go:141] libmachine: Decoding PEM data...
	I1204 13:02:49.315244    5964 main.go:141] libmachine: Parsing certificate...
	I1204 13:02:49.315310    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:02:49.315370    5964 main.go:141] libmachine: Decoding PEM data...
	I1204 13:02:49.315391    5964 main.go:141] libmachine: Parsing certificate...
	I1204 13:02:49.316331    5964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:02:49.482978    5964 main.go:141] libmachine: Creating SSH key...
	I1204 13:02:49.524857    5964 main.go:141] libmachine: Creating Disk image...
	I1204 13:02:49.524865    5964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:02:49.525105    5964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2
	I1204 13:02:49.534967    5964 main.go:141] libmachine: STDOUT: 
	I1204 13:02:49.534988    5964 main.go:141] libmachine: STDERR: 
	I1204 13:02:49.535068    5964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2 +20000M
	I1204 13:02:49.543732    5964 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:02:49.543749    5964 main.go:141] libmachine: STDERR: 
	I1204 13:02:49.543761    5964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2
	I1204 13:02:49.543766    5964 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:02:49.543784    5964 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:02:49.543808    5964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:d1:d4:0c:42:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/auto-395000/disk.qcow2
	I1204 13:02:49.545675    5964 main.go:141] libmachine: STDOUT: 
	I1204 13:02:49.545689    5964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:02:49.545710    5964 client.go:171] duration metric: took 230.688166ms to LocalClient.Create
	I1204 13:02:51.547860    5964 start.go:128] duration metric: took 2.287259s to createHost
	I1204 13:02:51.547898    5964 start.go:83] releasing machines lock for "auto-395000", held for 2.2876945s
	W1204 13:02:51.548068    5964 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:02:51.557502    5964 out.go:201] 
	W1204 13:02:51.566456    5964 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:02:51.566465    5964 out.go:270] * 
	* 
	W1204 13:02:51.567265    5964 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:02:51.577426    5964 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.89s)
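
Note: every failure in this group reduces to the single error visible above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. The qemu-img convert and resize steps succeed on every attempt; only the networking step fails. A minimal pre-flight probe, sketched in Go, assuming only the socket path taken from this log (probeSocket is a hypothetical helper, not part of minikube):

	// probe_socket.go: report whether a unix socket is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func probeSocket(path string) error {
		// A "connection refused" from this dial is the same ECONNREFUSED
		// that socket_vmnet_client reports throughout this log.
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocket("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

If the daemon is simply not running on the CI host, restarting it (for a Homebrew install, typically `sudo brew services start socket_vmnet`) should clear this condition for the remaining profiles.
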
TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.816525791s)

                                                
                                                
-- stdout --
	* [calico-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-395000" primary control-plane node in "calico-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 13:02:53.971555    6077 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:02:53.971732    6077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:02:53.971735    6077 out.go:358] Setting ErrFile to fd 2...
	I1204 13:02:53.971738    6077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:02:53.971867    6077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:02:53.973057    6077 out.go:352] Setting JSON to false
	I1204 13:02:53.991437    6077 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5544,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:02:53.991503    6077 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:02:53.998711    6077 out.go:177] * [calico-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:02:54.006625    6077 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:02:54.006659    6077 notify.go:220] Checking for updates...
	I1204 13:02:54.011914    6077 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:02:54.014653    6077 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:02:54.017733    6077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:02:54.020710    6077 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:02:54.023639    6077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:02:54.027078    6077 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:02:54.027147    6077 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:02:54.027196    6077 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:02:54.030659    6077 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:02:54.037663    6077 start.go:297] selected driver: qemu2
	I1204 13:02:54.037688    6077 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:02:54.037696    6077 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:02:54.040105    6077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:02:54.044616    6077 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:02:54.047677    6077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:02:54.047696    6077 cni.go:84] Creating CNI manager for "calico"
	I1204 13:02:54.047700    6077 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1204 13:02:54.047730    6077 start.go:340] cluster config:
	{Name:calico-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:02:54.052329    6077 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:02:54.060640    6077 out.go:177] * Starting "calico-395000" primary control-plane node in "calico-395000" cluster
	I1204 13:02:54.064652    6077 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:02:54.064668    6077 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:02:54.064682    6077 cache.go:56] Caching tarball of preloaded images
	I1204 13:02:54.064762    6077 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:02:54.064774    6077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:02:54.064831    6077 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/calico-395000/config.json ...
	I1204 13:02:54.064846    6077 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/calico-395000/config.json: {Name:mk8621657eb58cedd3904bc4ccca23ba51a13806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:02:54.065085    6077 start.go:360] acquireMachinesLock for calico-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:02:54.065130    6077 start.go:364] duration metric: took 39.417µs to acquireMachinesLock for "calico-395000"
	I1204 13:02:54.065142    6077 start.go:93] Provisioning new machine with config: &{Name:calico-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:02:54.065166    6077 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:02:54.072691    6077 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:02:54.088190    6077 start.go:159] libmachine.API.Create for "calico-395000" (driver="qemu2")
	I1204 13:02:54.088224    6077 client.go:168] LocalClient.Create starting
	I1204 13:02:54.088297    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:02:54.088334    6077 main.go:141] libmachine: Decoding PEM data...
	I1204 13:02:54.088352    6077 main.go:141] libmachine: Parsing certificate...
	I1204 13:02:54.088389    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:02:54.088417    6077 main.go:141] libmachine: Decoding PEM data...
	I1204 13:02:54.088425    6077 main.go:141] libmachine: Parsing certificate...
	I1204 13:02:54.088839    6077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:02:54.247838    6077 main.go:141] libmachine: Creating SSH key...
	I1204 13:02:54.340940    6077 main.go:141] libmachine: Creating Disk image...
	I1204 13:02:54.340947    6077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:02:54.341214    6077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2
	I1204 13:02:54.351357    6077 main.go:141] libmachine: STDOUT: 
	I1204 13:02:54.351379    6077 main.go:141] libmachine: STDERR: 
	I1204 13:02:54.351435    6077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2 +20000M
	I1204 13:02:54.360188    6077 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:02:54.360211    6077 main.go:141] libmachine: STDERR: 
	I1204 13:02:54.360228    6077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2
	I1204 13:02:54.360234    6077 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:02:54.360244    6077 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:02:54.360276    6077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:19:ce:34:c3:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2
	I1204 13:02:54.362159    6077 main.go:141] libmachine: STDOUT: 
	I1204 13:02:54.362173    6077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:02:54.362194    6077 client.go:171] duration metric: took 273.959166ms to LocalClient.Create
	I1204 13:02:56.364427    6077 start.go:128] duration metric: took 2.299197333s to createHost
	I1204 13:02:56.364505    6077 start.go:83] releasing machines lock for "calico-395000", held for 2.299337375s
	W1204 13:02:56.364605    6077 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:02:56.375781    6077 out.go:177] * Deleting "calico-395000" in qemu2 ...
	W1204 13:02:56.407164    6077 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:02:56.407194    6077 start.go:729] Will try again in 5 seconds ...
	I1204 13:03:01.409348    6077 start.go:360] acquireMachinesLock for calico-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:01.409561    6077 start.go:364] duration metric: took 178.542µs to acquireMachinesLock for "calico-395000"
	I1204 13:03:01.409580    6077 start.go:93] Provisioning new machine with config: &{Name:calico-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:01.409623    6077 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:01.418061    6077 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:01.433300    6077 start.go:159] libmachine.API.Create for "calico-395000" (driver="qemu2")
	I1204 13:03:01.433320    6077 client.go:168] LocalClient.Create starting
	I1204 13:03:01.433390    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:01.433437    6077 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:01.433448    6077 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:01.433481    6077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:01.433510    6077 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:01.433516    6077 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:01.433862    6077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:01.592714    6077 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:01.691810    6077 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:01.691817    6077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:01.692060    6077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2
	I1204 13:03:01.702095    6077 main.go:141] libmachine: STDOUT: 
	I1204 13:03:01.702117    6077 main.go:141] libmachine: STDERR: 
	I1204 13:03:01.702178    6077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2 +20000M
	I1204 13:03:01.710894    6077 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:01.710911    6077 main.go:141] libmachine: STDERR: 
	I1204 13:03:01.710923    6077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2
	I1204 13:03:01.710927    6077 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:01.710936    6077 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:01.710962    6077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ea:f3:53:7f:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/calico-395000/disk.qcow2
	I1204 13:03:01.712814    6077 main.go:141] libmachine: STDOUT: 
	I1204 13:03:01.712831    6077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:01.712859    6077 client.go:171] duration metric: took 279.532292ms to LocalClient.Create
	I1204 13:03:03.714989    6077 start.go:128] duration metric: took 2.30531925s to createHost
	I1204 13:03:03.715023    6077 start.go:83] releasing machines lock for "calico-395000", held for 2.305427292s
	W1204 13:03:03.715237    6077 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:03.727679    6077 out.go:201] 
	W1204 13:03:03.731848    6077 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:03:03.731866    6077 out.go:270] * 
	* 
	W1204 13:03:03.733194    6077 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:03:03.744720    6077 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
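
Note: the recovery path is identical across profiles: the first createHost attempt fails, the partial machine is deleted, the driver waits 5 seconds (start.go:729) and retries once, and only after the second refusal does it exit with status 80 (GUEST_PROVISION). A compressed sketch of that control flow, with createHost and deleteHost as hypothetical stand-ins for the libmachine calls in the log:

	// Reconstruction of the delete-and-retry flow shown above:
	// one fixed 5s backoff, one retry, then exit status 80.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// Stand-in for libmachine.API.Create; on this host it always
	// fails with the refused-socket error.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// Stand-in for the "* Deleting ... in qemu2 ..." cleanup step.
	func deleteHost(name string) {}

	func startWithRetry(name string) error {
		err := createHost(name)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost(name)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost(name)     // second and final attempt
	}

	func main() {
		if err := startWithRetry("calico-395000"); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // matches "exit status 80" reported by net_test.go
		}
	}

Because both attempts hit the same refused socket, the fixed 5-second backoff cannot help here; the failure is environmental rather than transient.
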
TestNetworkPlugins/group/custom-flannel/Start (9.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.96255375s)

                                                
                                                
-- stdout --
	* [custom-flannel-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-395000" primary control-plane node in "custom-flannel-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 13:03:06.279576    6194 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:03:06.279737    6194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:06.279740    6194 out.go:358] Setting ErrFile to fd 2...
	I1204 13:03:06.279743    6194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:06.279852    6194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:03:06.281024    6194 out.go:352] Setting JSON to false
	I1204 13:03:06.299042    6194 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5557,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:03:06.299124    6194 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:03:06.303020    6194 out.go:177] * [custom-flannel-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:03:06.310998    6194 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:03:06.311095    6194 notify.go:220] Checking for updates...
	I1204 13:03:06.318904    6194 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:03:06.321847    6194 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:03:06.325921    6194 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:03:06.328939    6194 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:03:06.331949    6194 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:03:06.335336    6194 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:03:06.335409    6194 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:03:06.335460    6194 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:03:06.339938    6194 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:03:06.350829    6194 start.go:297] selected driver: qemu2
	I1204 13:03:06.350837    6194 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:03:06.350846    6194 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:03:06.353496    6194 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:03:06.357948    6194 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:03:06.360899    6194 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:03:06.360913    6194 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1204 13:03:06.360921    6194 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1204 13:03:06.360960    6194 start.go:340] cluster config:
	{Name:custom-flannel-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:03:06.365593    6194 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:03:06.373781    6194 out.go:177] * Starting "custom-flannel-395000" primary control-plane node in "custom-flannel-395000" cluster
	I1204 13:03:06.377941    6194 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:03:06.377958    6194 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:03:06.377967    6194 cache.go:56] Caching tarball of preloaded images
	I1204 13:03:06.378056    6194 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:03:06.378062    6194 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:03:06.378127    6194 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/custom-flannel-395000/config.json ...
	I1204 13:03:06.378138    6194 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/custom-flannel-395000/config.json: {Name:mk9fb1075f1216c236d86e4055c52ffea30520ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:03:06.378612    6194 start.go:360] acquireMachinesLock for custom-flannel-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:06.378661    6194 start.go:364] duration metric: took 41.708µs to acquireMachinesLock for "custom-flannel-395000"
	I1204 13:03:06.378678    6194 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:06.378711    6194 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:06.382822    6194 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:06.399070    6194 start.go:159] libmachine.API.Create for "custom-flannel-395000" (driver="qemu2")
	I1204 13:03:06.399097    6194 client.go:168] LocalClient.Create starting
	I1204 13:03:06.399182    6194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:06.399222    6194 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:06.399236    6194 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:06.399273    6194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:06.399302    6194 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:06.399310    6194 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:06.399750    6194 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:06.557674    6194 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:06.760257    6194 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:06.760267    6194 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:06.760542    6194 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2
	I1204 13:03:06.771153    6194 main.go:141] libmachine: STDOUT: 
	I1204 13:03:06.771177    6194 main.go:141] libmachine: STDERR: 
	I1204 13:03:06.771266    6194 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2 +20000M
	I1204 13:03:06.780154    6194 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:06.780220    6194 main.go:141] libmachine: STDERR: 
	I1204 13:03:06.780241    6194 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2
	I1204 13:03:06.780246    6194 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:06.780261    6194 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:06.780287    6194 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:6f:d1:bd:e2:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2
	I1204 13:03:06.782135    6194 main.go:141] libmachine: STDOUT: 
	I1204 13:03:06.782147    6194 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:06.782166    6194 client.go:171] duration metric: took 383.058583ms to LocalClient.Create
	I1204 13:03:08.784402    6194 start.go:128] duration metric: took 2.405629708s to createHost
	I1204 13:03:08.784498    6194 start.go:83] releasing machines lock for "custom-flannel-395000", held for 2.405797958s
	W1204 13:03:08.784662    6194 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:08.797151    6194 out.go:177] * Deleting "custom-flannel-395000" in qemu2 ...
	W1204 13:03:08.826748    6194 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:08.826789    6194 start.go:729] Will try again in 5 seconds ...
	I1204 13:03:13.827648    6194 start.go:360] acquireMachinesLock for custom-flannel-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:13.828070    6194 start.go:364] duration metric: took 347.792µs to acquireMachinesLock for "custom-flannel-395000"
	I1204 13:03:13.828194    6194 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:13.828444    6194 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:13.838146    6194 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:13.880229    6194 start.go:159] libmachine.API.Create for "custom-flannel-395000" (driver="qemu2")
	I1204 13:03:13.880287    6194 client.go:168] LocalClient.Create starting
	I1204 13:03:13.880452    6194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:13.880532    6194 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:13.880552    6194 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:13.880620    6194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:13.880680    6194 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:13.880699    6194 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:13.881269    6194 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:14.051124    6194 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:14.141789    6194 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:14.141798    6194 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:14.142039    6194 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2
	I1204 13:03:14.152367    6194 main.go:141] libmachine: STDOUT: 
	I1204 13:03:14.152402    6194 main.go:141] libmachine: STDERR: 
	I1204 13:03:14.152472    6194 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2 +20000M
	I1204 13:03:14.162086    6194 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:14.162109    6194 main.go:141] libmachine: STDERR: 
	I1204 13:03:14.162124    6194 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2
	I1204 13:03:14.162130    6194 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:14.162139    6194 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:14.162173    6194 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:d7:16:d2:58:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/custom-flannel-395000/disk.qcow2
	I1204 13:03:14.164260    6194 main.go:141] libmachine: STDOUT: 
	I1204 13:03:14.164274    6194 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:14.164298    6194 client.go:171] duration metric: took 283.984083ms to LocalClient.Create
	I1204 13:03:16.166493    6194 start.go:128] duration metric: took 2.337975125s to createHost
	I1204 13:03:16.166549    6194 start.go:83] releasing machines lock for "custom-flannel-395000", held for 2.338434s
	W1204 13:03:16.166846    6194 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:16.176333    6194 out.go:201] 
	W1204 13:03:16.185506    6194 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:03:16.185532    6194 out.go:270] * 
	* 
	W1204 13:03:16.186848    6194 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:03:16.197292    6194 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.96s)
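
Every failure in this group has the same root cause, visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client is refused before qemu-system-aarch64 is ever launched. A minimal Go sketch of that probe, assuming only the SocketVMnetPath reported in the config dump (the file name and the probe itself are illustrative, not minikube code):

// probe_socket_vmnet.go: illustrative diagnostic, not part of minikube.
// Dials the unix socket that socket_vmnet_client connects to; on this
// agent the dial fails with "connection refused", matching the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // SocketVMnetPath from the log
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// Expected here: connection refused, i.e. the socket_vmnet
		// daemon is not running (or not listening) on the build agent.
		fmt.Printf("dial %s: %v\n", path, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s: ok, daemon is listening\n", path)
}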

TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.854378875s)

-- stdout --
	* [false-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-395000" primary control-plane node in "false-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:03:18.807165    6319 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:03:18.807315    6319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:18.807318    6319 out.go:358] Setting ErrFile to fd 2...
	I1204 13:03:18.807320    6319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:18.807464    6319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:03:18.808677    6319 out.go:352] Setting JSON to false
	I1204 13:03:18.827504    6319 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5569,"bootTime":1733340629,"procs":582,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:03:18.827580    6319 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:03:18.831667    6319 out.go:177] * [false-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:03:18.841690    6319 notify.go:220] Checking for updates...
	I1204 13:03:18.845861    6319 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:03:18.849789    6319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:03:18.855798    6319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:03:18.859549    6319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:03:18.862734    6319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:03:18.865743    6319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:03:18.869070    6319 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:03:18.869144    6319 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:03:18.869212    6319 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:03:18.872717    6319 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:03:18.879791    6319 start.go:297] selected driver: qemu2
	I1204 13:03:18.879799    6319 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:03:18.879815    6319 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:03:18.882192    6319 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:03:18.886703    6319 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:03:18.889837    6319 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:03:18.889862    6319 cni.go:84] Creating CNI manager for "false"
	I1204 13:03:18.889897    6319 start.go:340] cluster config:
	{Name:false-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:03:18.894429    6319 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:03:18.902775    6319 out.go:177] * Starting "false-395000" primary control-plane node in "false-395000" cluster
	I1204 13:03:18.906725    6319 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:03:18.906742    6319 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:03:18.906753    6319 cache.go:56] Caching tarball of preloaded images
	I1204 13:03:18.906839    6319 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:03:18.906845    6319 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:03:18.906909    6319 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/false-395000/config.json ...
	I1204 13:03:18.906921    6319 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/false-395000/config.json: {Name:mke30a3d9be27f562049be4dd774136b18586fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:03:18.907203    6319 start.go:360] acquireMachinesLock for false-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:18.907250    6319 start.go:364] duration metric: took 41.084µs to acquireMachinesLock for "false-395000"
	I1204 13:03:18.907263    6319 start.go:93] Provisioning new machine with config: &{Name:false-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:18.907293    6319 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:18.915771    6319 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:18.932627    6319 start.go:159] libmachine.API.Create for "false-395000" (driver="qemu2")
	I1204 13:03:18.932660    6319 client.go:168] LocalClient.Create starting
	I1204 13:03:18.932732    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:18.932774    6319 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:18.932790    6319 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:18.932829    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:18.932859    6319 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:18.932866    6319 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:18.933303    6319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:19.091453    6319 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:19.155719    6319 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:19.155727    6319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:19.155957    6319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2
	I1204 13:03:19.165818    6319 main.go:141] libmachine: STDOUT: 
	I1204 13:03:19.165844    6319 main.go:141] libmachine: STDERR: 
	I1204 13:03:19.165905    6319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2 +20000M
	I1204 13:03:19.174857    6319 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:19.174872    6319 main.go:141] libmachine: STDERR: 
	I1204 13:03:19.174890    6319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2
	I1204 13:03:19.174895    6319 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:19.174920    6319 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:19.174957    6319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9f:da:ad:de:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2
	I1204 13:03:19.176840    6319 main.go:141] libmachine: STDOUT: 
	I1204 13:03:19.176860    6319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:19.176881    6319 client.go:171] duration metric: took 244.211167ms to LocalClient.Create
	I1204 13:03:21.179112    6319 start.go:128] duration metric: took 2.2717645s to createHost
	I1204 13:03:21.179184    6319 start.go:83] releasing machines lock for "false-395000", held for 2.271896042s
	W1204 13:03:21.179278    6319 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:21.189346    6319 out.go:177] * Deleting "false-395000" in qemu2 ...
	W1204 13:03:21.222536    6319 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:21.222567    6319 start.go:729] Will try again in 5 seconds ...
	I1204 13:03:26.224796    6319 start.go:360] acquireMachinesLock for false-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:26.225130    6319 start.go:364] duration metric: took 270µs to acquireMachinesLock for "false-395000"
	I1204 13:03:26.225203    6319 start.go:93] Provisioning new machine with config: &{Name:false-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:26.225324    6319 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:26.241750    6319 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:26.276244    6319 start.go:159] libmachine.API.Create for "false-395000" (driver="qemu2")
	I1204 13:03:26.276324    6319 client.go:168] LocalClient.Create starting
	I1204 13:03:26.276510    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:26.276601    6319 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:26.276619    6319 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:26.276686    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:26.276741    6319 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:26.276756    6319 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:26.277584    6319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:26.442776    6319 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:26.562063    6319 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:26.562075    6319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:26.562324    6319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2
	I1204 13:03:26.572514    6319 main.go:141] libmachine: STDOUT: 
	I1204 13:03:26.572530    6319 main.go:141] libmachine: STDERR: 
	I1204 13:03:26.572596    6319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2 +20000M
	I1204 13:03:26.581204    6319 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:26.581223    6319 main.go:141] libmachine: STDERR: 
	I1204 13:03:26.581235    6319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2
	I1204 13:03:26.581240    6319 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:26.581251    6319 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:26.581285    6319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:5d:44:c8:70:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/false-395000/disk.qcow2
	I1204 13:03:26.583100    6319 main.go:141] libmachine: STDOUT: 
	I1204 13:03:26.583125    6319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:26.583139    6319 client.go:171] duration metric: took 306.796ms to LocalClient.Create
	I1204 13:03:28.585372    6319 start.go:128] duration metric: took 2.359975084s to createHost
	I1204 13:03:28.585454    6319 start.go:83] releasing machines lock for "false-395000", held for 2.360280208s
	W1204 13:03:28.585851    6319 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:28.599572    6319 out.go:201] 
	W1204 13:03:28.603767    6319 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:03:28.603794    6319 out.go:270] * 
	* 
	W1204 13:03:28.606589    6319 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:03:28.615645    6319 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
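
The retry flow is identical in every failed start above: LocalClient.Create fails, the half-created profile is deleted ("* Deleting ... in qemu2 ..."), minikube waits five seconds ("Will try again in 5 seconds ..."), re-acquires the machines lock, and tries once more before exiting with GUEST_PROVISION (exit status 80). An illustrative Go reconstruction of that shape; createHost and deleteHost here are placeholders, not minikube's actual API:

// retry_sketch.go: an illustrative reconstruction of the create/retry
// shape in the logs above, not minikube's actual implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// errConnRefused mirrors the error string reported by the qemu2 driver.
var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// createHost stands in for libmachine.API.Create; on this agent it
// always fails because the vmnet socket refuses the connection.
func createHost() error { return errConnRefused }

// deleteHost stands in for the "Deleting ... in qemu2" cleanup step.
func deleteHost() {}

func main() {
	if err := createHost(); err != nil {
		deleteHost()
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = createHost(); err != nil {
			// The second attempt hits the same refused socket, so the
			// run exits with status 80 (GUEST_PROVISION).
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}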

TestNetworkPlugins/group/kindnet/Start (9.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.946900292s)

-- stdout --
	* [kindnet-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-395000" primary control-plane node in "kindnet-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:03:30.991167    6428 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:03:30.991337    6428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:30.991341    6428 out.go:358] Setting ErrFile to fd 2...
	I1204 13:03:30.991343    6428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:30.991483    6428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:03:30.992682    6428 out.go:352] Setting JSON to false
	I1204 13:03:31.010799    6428 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5581,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:03:31.010882    6428 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:03:31.016228    6428 out.go:177] * [kindnet-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:03:31.024127    6428 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:03:31.024219    6428 notify.go:220] Checking for updates...
	I1204 13:03:31.030986    6428 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:03:31.034103    6428 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:03:31.037879    6428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:03:31.041119    6428 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:03:31.044116    6428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:03:31.047543    6428 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:03:31.047618    6428 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:03:31.047664    6428 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:03:31.052103    6428 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:03:31.059183    6428 start.go:297] selected driver: qemu2
	I1204 13:03:31.059190    6428 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:03:31.059201    6428 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:03:31.061663    6428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:03:31.066063    6428 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:03:31.069196    6428 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:03:31.069210    6428 cni.go:84] Creating CNI manager for "kindnet"
	I1204 13:03:31.069216    6428 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 13:03:31.069242    6428 start.go:340] cluster config:
	{Name:kindnet-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:03:31.073767    6428 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:03:31.082147    6428 out.go:177] * Starting "kindnet-395000" primary control-plane node in "kindnet-395000" cluster
	I1204 13:03:31.086067    6428 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:03:31.086084    6428 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:03:31.086095    6428 cache.go:56] Caching tarball of preloaded images
	I1204 13:03:31.086176    6428 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:03:31.086188    6428 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:03:31.086257    6428 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/kindnet-395000/config.json ...
	I1204 13:03:31.086267    6428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/kindnet-395000/config.json: {Name:mkf3e88086f1d1ab1527b7af14074dfdfc4b810e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:03:31.086704    6428 start.go:360] acquireMachinesLock for kindnet-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:31.086751    6428 start.go:364] duration metric: took 41.5µs to acquireMachinesLock for "kindnet-395000"
	I1204 13:03:31.086764    6428 start.go:93] Provisioning new machine with config: &{Name:kindnet-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:31.086789    6428 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:31.095124    6428 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:31.110636    6428 start.go:159] libmachine.API.Create for "kindnet-395000" (driver="qemu2")
	I1204 13:03:31.110661    6428 client.go:168] LocalClient.Create starting
	I1204 13:03:31.110732    6428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:31.110774    6428 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:31.110783    6428 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:31.110825    6428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:31.110855    6428 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:31.110861    6428 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:31.111353    6428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:31.277871    6428 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:31.338656    6428 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:31.338665    6428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:31.338906    6428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2
	I1204 13:03:31.349057    6428 main.go:141] libmachine: STDOUT: 
	I1204 13:03:31.349083    6428 main.go:141] libmachine: STDERR: 
	I1204 13:03:31.349136    6428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2 +20000M
	I1204 13:03:31.357868    6428 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:31.357900    6428 main.go:141] libmachine: STDERR: 
	I1204 13:03:31.357925    6428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2
	I1204 13:03:31.357929    6428 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:31.357941    6428 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:31.357973    6428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e2:19:93:97:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2
	I1204 13:03:31.359850    6428 main.go:141] libmachine: STDOUT: 
	I1204 13:03:31.359866    6428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:31.359886    6428 client.go:171] duration metric: took 249.21475ms to LocalClient.Create
	I1204 13:03:33.362108    6428 start.go:128] duration metric: took 2.275263667s to createHost
	I1204 13:03:33.362185    6428 start.go:83] releasing machines lock for "kindnet-395000", held for 2.275396541s
	W1204 13:03:33.362315    6428 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:33.376477    6428 out.go:177] * Deleting "kindnet-395000" in qemu2 ...
	W1204 13:03:33.406451    6428 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:33.406483    6428 start.go:729] Will try again in 5 seconds ...
	I1204 13:03:38.408918    6428 start.go:360] acquireMachinesLock for kindnet-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:38.409505    6428 start.go:364] duration metric: took 485.875µs to acquireMachinesLock for "kindnet-395000"
	I1204 13:03:38.409646    6428 start.go:93] Provisioning new machine with config: &{Name:kindnet-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:38.409961    6428 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:38.421669    6428 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:38.468487    6428 start.go:159] libmachine.API.Create for "kindnet-395000" (driver="qemu2")
	I1204 13:03:38.468543    6428 client.go:168] LocalClient.Create starting
	I1204 13:03:38.468683    6428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:38.468767    6428 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:38.468784    6428 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:38.468861    6428 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:38.468919    6428 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:38.468951    6428 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:38.469793    6428 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:38.637621    6428 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:38.839683    6428 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:38.839693    6428 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:38.839964    6428 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2
	I1204 13:03:38.850385    6428 main.go:141] libmachine: STDOUT: 
	I1204 13:03:38.850405    6428 main.go:141] libmachine: STDERR: 
	I1204 13:03:38.850472    6428 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2 +20000M
	I1204 13:03:38.859363    6428 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:38.859380    6428 main.go:141] libmachine: STDERR: 
	I1204 13:03:38.859396    6428 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2
	I1204 13:03:38.859401    6428 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:38.859410    6428 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:38.859450    6428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:67:e5:64:18:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kindnet-395000/disk.qcow2
	I1204 13:03:38.861470    6428 main.go:141] libmachine: STDOUT: 
	I1204 13:03:38.861485    6428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:38.861498    6428 client.go:171] duration metric: took 392.939584ms to LocalClient.Create
	I1204 13:03:40.862448    6428 start.go:128] duration metric: took 2.452437s to createHost
	I1204 13:03:40.862474    6428 start.go:83] releasing machines lock for "kindnet-395000", held for 2.452919542s
	W1204 13:03:40.862630    6428 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:40.876211    6428 out.go:201] 
	W1204 13:03:40.880951    6428 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:03:40.880960    6428 out.go:270] * 
	* 
	W1204 13:03:40.881465    6428 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:03:40.892885    6428 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.95s)
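
Editor's note: every start in this group fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and each test aborts after roughly 10 seconds. The Go sketch below is ours, not part of the suite; it only reproduces the connectivity check, using the SocketVMnetPath value from the cluster configs logged in this report.

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The qemu2 driver obtains a file descriptor from this unix socket and
		// hands it to QEMU as "-netdev socket,id=net0,fd=3" (see the command
		// lines above). If nothing is listening, every start fails the same way.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet is not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

The probable fix is restarting the socket_vmnet daemon behind /opt/socket_vmnet (usually a root-owned launchd service, so the check above may also need the privileges the CI job runs with); that is an assumption about this agent's setup, not something the log itself confirms.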

TestNetworkPlugins/group/flannel/Start (9.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.934060459s)

-- stdout --
	* [flannel-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-395000" primary control-plane node in "flannel-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:03:43.341673    6548 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:03:43.341843    6548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:43.341846    6548 out.go:358] Setting ErrFile to fd 2...
	I1204 13:03:43.341849    6548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:43.341979    6548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:03:43.343149    6548 out.go:352] Setting JSON to false
	I1204 13:03:43.360949    6548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5594,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:03:43.361018    6548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:03:43.368575    6548 out.go:177] * [flannel-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:03:43.376529    6548 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:03:43.376579    6548 notify.go:220] Checking for updates...
	I1204 13:03:43.384453    6548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:03:43.387470    6548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:03:43.391513    6548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:03:43.394449    6548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:03:43.397470    6548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:03:43.400839    6548 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:03:43.400916    6548 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:03:43.400968    6548 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:03:43.404454    6548 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:03:43.411519    6548 start.go:297] selected driver: qemu2
	I1204 13:03:43.411527    6548 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:03:43.411540    6548 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:03:43.414090    6548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:03:43.417483    6548 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:03:43.421562    6548 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:03:43.421580    6548 cni.go:84] Creating CNI manager for "flannel"
	I1204 13:03:43.421584    6548 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1204 13:03:43.421627    6548 start.go:340] cluster config:
	{Name:flannel-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:03:43.426131    6548 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:03:43.431419    6548 out.go:177] * Starting "flannel-395000" primary control-plane node in "flannel-395000" cluster
	I1204 13:03:43.435479    6548 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:03:43.435495    6548 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:03:43.435504    6548 cache.go:56] Caching tarball of preloaded images
	I1204 13:03:43.435588    6548 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:03:43.435593    6548 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:03:43.435661    6548 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/flannel-395000/config.json ...
	I1204 13:03:43.435671    6548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/flannel-395000/config.json: {Name:mkf00cfed2cbbc3528fe86668d2faac5a9073be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:03:43.436118    6548 start.go:360] acquireMachinesLock for flannel-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:43.436163    6548 start.go:364] duration metric: took 39.625µs to acquireMachinesLock for "flannel-395000"
	I1204 13:03:43.436176    6548 start.go:93] Provisioning new machine with config: &{Name:flannel-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:43.436210    6548 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:43.443497    6548 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:43.459349    6548 start.go:159] libmachine.API.Create for "flannel-395000" (driver="qemu2")
	I1204 13:03:43.459383    6548 client.go:168] LocalClient.Create starting
	I1204 13:03:43.459462    6548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:43.459507    6548 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:43.459521    6548 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:43.459553    6548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:43.459585    6548 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:43.459594    6548 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:43.460098    6548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:43.618917    6548 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:43.699790    6548 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:43.699800    6548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:43.700050    6548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2
	I1204 13:03:43.710262    6548 main.go:141] libmachine: STDOUT: 
	I1204 13:03:43.710286    6548 main.go:141] libmachine: STDERR: 
	I1204 13:03:43.710349    6548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2 +20000M
	I1204 13:03:43.719079    6548 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:43.719094    6548 main.go:141] libmachine: STDERR: 
	I1204 13:03:43.719120    6548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2
	I1204 13:03:43.719128    6548 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:43.719143    6548 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:43.719173    6548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:62:b4:12:ee:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2
	I1204 13:03:43.720990    6548 main.go:141] libmachine: STDOUT: 
	I1204 13:03:43.721004    6548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:43.721023    6548 client.go:171] duration metric: took 261.632375ms to LocalClient.Create
	I1204 13:03:45.723264    6548 start.go:128] duration metric: took 2.286994375s to createHost
	I1204 13:03:45.723358    6548 start.go:83] releasing machines lock for "flannel-395000", held for 2.287157666s
	W1204 13:03:45.723491    6548 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:45.739840    6548 out.go:177] * Deleting "flannel-395000" in qemu2 ...
	W1204 13:03:45.766298    6548 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:45.766335    6548 start.go:729] Will try again in 5 seconds ...
	I1204 13:03:50.768694    6548 start.go:360] acquireMachinesLock for flannel-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:50.769414    6548 start.go:364] duration metric: took 564.916µs to acquireMachinesLock for "flannel-395000"
	I1204 13:03:50.769481    6548 start.go:93] Provisioning new machine with config: &{Name:flannel-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:50.769773    6548 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:50.780498    6548 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:50.828211    6548 start.go:159] libmachine.API.Create for "flannel-395000" (driver="qemu2")
	I1204 13:03:50.828272    6548 client.go:168] LocalClient.Create starting
	I1204 13:03:50.828439    6548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:50.828517    6548 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:50.828535    6548 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:50.828596    6548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:50.828658    6548 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:50.828669    6548 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:50.829398    6548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:50.994887    6548 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:51.181440    6548 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:51.181452    6548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:51.182167    6548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2
	I1204 13:03:51.192533    6548 main.go:141] libmachine: STDOUT: 
	I1204 13:03:51.192558    6548 main.go:141] libmachine: STDERR: 
	I1204 13:03:51.192629    6548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2 +20000M
	I1204 13:03:51.201329    6548 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:51.201346    6548 main.go:141] libmachine: STDERR: 
	I1204 13:03:51.201357    6548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2
	I1204 13:03:51.201377    6548 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:51.201386    6548 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:51.201414    6548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cf:a7:09:83:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/flannel-395000/disk.qcow2
	I1204 13:03:51.203298    6548 main.go:141] libmachine: STDOUT: 
	I1204 13:03:51.203312    6548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:51.203325    6548 client.go:171] duration metric: took 375.039125ms to LocalClient.Create
	I1204 13:03:53.205485    6548 start.go:128] duration metric: took 2.435657375s to createHost
	I1204 13:03:53.205564    6548 start.go:83] releasing machines lock for "flannel-395000", held for 2.436096834s
	W1204 13:03:53.205874    6548 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:53.214267    6548 out.go:201] 
	W1204 13:03:53.220382    6548 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:03:53.220437    6548 out.go:270] * 
	* 
	W1204 13:03:53.221889    6548 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:03:53.231231    6548 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.94s)
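
Editor's note: the stderr above records the same two-attempt control flow for every profile: create the host, and on failure delete the half-created profile, wait 5 seconds, retry once, then exit with GUEST_PROVISION, which net_test.go observes as "failed start: exit status 80". A minimal Go sketch of that flow as logged (ours, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the socket_vmnet_client + qemu-system-aarch64
	// launch that fails in the log above.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		profile := "flannel-395000" // profile name from this test
		if err := createHost(profile); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// The log shows the profile being deleted before the retry.
			time.Sleep(5 * time.Second)
			if err := createHost(profile); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80)
			}
		}
	}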

TestNetworkPlugins/group/enable-default-cni/Start (9.98s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.978478333s)

-- stdout --
	* [enable-default-cni-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-395000" primary control-plane node in "enable-default-cni-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:03:55.793147    6667 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:03:55.793308    6667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:55.793312    6667 out.go:358] Setting ErrFile to fd 2...
	I1204 13:03:55.793314    6667 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:03:55.793435    6667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:03:55.794541    6667 out.go:352] Setting JSON to false
	I1204 13:03:55.812282    6667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5606,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:03:55.812355    6667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:03:55.817028    6667 out.go:177] * [enable-default-cni-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:03:55.824970    6667 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:03:55.825047    6667 notify.go:220] Checking for updates...
	I1204 13:03:55.832953    6667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:03:55.835999    6667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:03:55.839936    6667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:03:55.843015    6667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:03:55.845991    6667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:03:55.849368    6667 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:03:55.849453    6667 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:03:55.849516    6667 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:03:55.853914    6667 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:03:55.860893    6667 start.go:297] selected driver: qemu2
	I1204 13:03:55.860900    6667 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:03:55.860908    6667 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:03:55.863363    6667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:03:55.865925    6667 out.go:177] * Automatically selected the socket_vmnet network
	E1204 13:03:55.870040    6667 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1204 13:03:55.870053    6667 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:03:55.870068    6667 cni.go:84] Creating CNI manager for "bridge"
	I1204 13:03:55.870070    6667 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 13:03:55.870107    6667 start.go:340] cluster config:
	{Name:enable-default-cni-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:03:55.874452    6667 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:03:55.882965    6667 out.go:177] * Starting "enable-default-cni-395000" primary control-plane node in "enable-default-cni-395000" cluster
	I1204 13:03:55.885886    6667 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:03:55.885898    6667 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:03:55.885908    6667 cache.go:56] Caching tarball of preloaded images
	I1204 13:03:55.885972    6667 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:03:55.885977    6667 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:03:55.886027    6667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/enable-default-cni-395000/config.json ...
	I1204 13:03:55.886037    6667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/enable-default-cni-395000/config.json: {Name:mk4eb15dcd3786527ed8818bc9eec05a1e231736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:03:55.886474    6667 start.go:360] acquireMachinesLock for enable-default-cni-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:03:55.886518    6667 start.go:364] duration metric: took 36.542µs to acquireMachinesLock for "enable-default-cni-395000"
	I1204 13:03:55.886530    6667 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:03:55.886552    6667 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:03:55.890997    6667 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:03:55.905875    6667 start.go:159] libmachine.API.Create for "enable-default-cni-395000" (driver="qemu2")
	I1204 13:03:55.905900    6667 client.go:168] LocalClient.Create starting
	I1204 13:03:55.905962    6667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:03:55.906000    6667 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:55.906009    6667 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:55.906046    6667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:03:55.906075    6667 main.go:141] libmachine: Decoding PEM data...
	I1204 13:03:55.906082    6667 main.go:141] libmachine: Parsing certificate...
	I1204 13:03:55.906442    6667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:03:56.065298    6667 main.go:141] libmachine: Creating SSH key...
	I1204 13:03:56.184907    6667 main.go:141] libmachine: Creating Disk image...
	I1204 13:03:56.184924    6667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:03:56.185349    6667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2
	I1204 13:03:56.195301    6667 main.go:141] libmachine: STDOUT: 
	I1204 13:03:56.195322    6667 main.go:141] libmachine: STDERR: 
	I1204 13:03:56.195386    6667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2 +20000M
	I1204 13:03:56.204113    6667 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:03:56.204139    6667 main.go:141] libmachine: STDERR: 
	I1204 13:03:56.204159    6667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2
	I1204 13:03:56.204165    6667 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:03:56.204177    6667 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:03:56.204202    6667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:76:5d:e5:a7:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2
	I1204 13:03:56.206114    6667 main.go:141] libmachine: STDOUT: 
	I1204 13:03:56.206163    6667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:03:56.206186    6667 client.go:171] duration metric: took 300.276792ms to LocalClient.Create
	I1204 13:03:58.208426    6667 start.go:128] duration metric: took 2.321816625s to createHost
	I1204 13:03:58.208540    6667 start.go:83] releasing machines lock for "enable-default-cni-395000", held for 2.321984583s
	W1204 13:03:58.208625    6667 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:58.220029    6667 out.go:177] * Deleting "enable-default-cni-395000" in qemu2 ...
	W1204 13:03:58.254081    6667 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:03:58.254112    6667 start.go:729] Will try again in 5 seconds ...
	I1204 13:04:03.256344    6667 start.go:360] acquireMachinesLock for enable-default-cni-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:03.257034    6667 start.go:364] duration metric: took 588.708µs to acquireMachinesLock for "enable-default-cni-395000"
	I1204 13:04:03.257176    6667 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:03.257466    6667 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:03.266948    6667 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:04:03.316385    6667 start.go:159] libmachine.API.Create for "enable-default-cni-395000" (driver="qemu2")
	I1204 13:04:03.316437    6667 client.go:168] LocalClient.Create starting
	I1204 13:04:03.316588    6667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:03.316677    6667 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:03.316696    6667 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:03.316766    6667 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:03.316827    6667 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:03.316841    6667 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:03.317516    6667 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:03.487155    6667 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:03.668028    6667 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:03.668039    6667 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:03.668345    6667 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2
	I1204 13:04:03.679438    6667 main.go:141] libmachine: STDOUT: 
	I1204 13:04:03.679468    6667 main.go:141] libmachine: STDERR: 
	I1204 13:04:03.679545    6667 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2 +20000M
	I1204 13:04:03.688640    6667 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:03.688657    6667 main.go:141] libmachine: STDERR: 
	I1204 13:04:03.688678    6667 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2
	I1204 13:04:03.688684    6667 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:03.688694    6667 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:03.688723    6667 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:35:9b:86:c5:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/enable-default-cni-395000/disk.qcow2
	I1204 13:04:03.690644    6667 main.go:141] libmachine: STDOUT: 
	I1204 13:04:03.690668    6667 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:03.690687    6667 client.go:171] duration metric: took 374.238084ms to LocalClient.Create
	I1204 13:04:05.692900    6667 start.go:128] duration metric: took 2.435340958s to createHost
	I1204 13:04:05.693004    6667 start.go:83] releasing machines lock for "enable-default-cni-395000", held for 2.435916834s
	W1204 13:04:05.693361    6667 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:05.705042    6667 out.go:201] 
	W1204 13:04:05.708981    6667 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:04:05.709072    6667 out.go:270] * 
	* 
	W1204 13:04:05.711699    6667 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:04:05.725942    6667 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.98s)
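
Editor's note: besides the shared socket_vmnet failure, this run also logs "Found deprecated --enable-default-cni flag, setting --cni=bridge" (E1204 13:03:55.870040), i.e. the flag is internally rewritten to the bridge CNI. Once the socket is healthy, the non-deprecated equivalent of the command under test would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2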

TestNetworkPlugins/group/bridge/Start (9.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.893361584s)

-- stdout --
	* [bridge-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-395000" primary control-plane node in "bridge-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:04:08.118148    6778 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:04:08.118282    6778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:08.118285    6778 out.go:358] Setting ErrFile to fd 2...
	I1204 13:04:08.118288    6778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:08.118408    6778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:04:08.119606    6778 out.go:352] Setting JSON to false
	I1204 13:04:08.137473    6778 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5619,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:04:08.137565    6778 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:04:08.144071    6778 out.go:177] * [bridge-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:04:08.152042    6778 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:04:08.152112    6778 notify.go:220] Checking for updates...
	I1204 13:04:08.159015    6778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:04:08.162016    6778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:04:08.166025    6778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:04:08.169040    6778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:04:08.171994    6778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:04:08.175346    6778 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:04:08.175414    6778 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:04:08.175464    6778 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:04:08.178956    6778 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:04:08.186017    6778 start.go:297] selected driver: qemu2
	I1204 13:04:08.186024    6778 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:04:08.186036    6778 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:04:08.188394    6778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:04:08.192984    6778 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:04:08.196071    6778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:04:08.196086    6778 cni.go:84] Creating CNI manager for "bridge"
	I1204 13:04:08.196089    6778 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 13:04:08.196118    6778 start.go:340] cluster config:
	{Name:bridge-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:04:08.200511    6778 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:08.209037    6778 out.go:177] * Starting "bridge-395000" primary control-plane node in "bridge-395000" cluster
	I1204 13:04:08.212929    6778 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:04:08.212944    6778 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:04:08.212955    6778 cache.go:56] Caching tarball of preloaded images
	I1204 13:04:08.213029    6778 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:04:08.213034    6778 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:04:08.213093    6778 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/bridge-395000/config.json ...
	I1204 13:04:08.213104    6778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/bridge-395000/config.json: {Name:mk1832879013a02f02c20e825c3d7c0e54985deb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:04:08.213348    6778 start.go:360] acquireMachinesLock for bridge-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:08.213393    6778 start.go:364] duration metric: took 38.959µs to acquireMachinesLock for "bridge-395000"
	I1204 13:04:08.213405    6778 start.go:93] Provisioning new machine with config: &{Name:bridge-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:08.213434    6778 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:08.222056    6778 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:04:08.238186    6778 start.go:159] libmachine.API.Create for "bridge-395000" (driver="qemu2")
	I1204 13:04:08.238217    6778 client.go:168] LocalClient.Create starting
	I1204 13:04:08.238293    6778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:08.238329    6778 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:08.238343    6778 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:08.238381    6778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:08.238410    6778 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:08.238422    6778 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:08.238883    6778 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:08.398113    6778 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:08.511670    6778 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:08.511678    6778 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:08.512917    6778 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2
	I1204 13:04:08.522981    6778 main.go:141] libmachine: STDOUT: 
	I1204 13:04:08.522996    6778 main.go:141] libmachine: STDERR: 
	I1204 13:04:08.523059    6778 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2 +20000M
	I1204 13:04:08.531813    6778 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:08.531828    6778 main.go:141] libmachine: STDERR: 
	I1204 13:04:08.531848    6778 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2
	I1204 13:04:08.531853    6778 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:08.531871    6778 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:08.531902    6778 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:56:b5:23:cf:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2
	I1204 13:04:08.533825    6778 main.go:141] libmachine: STDOUT: 
	I1204 13:04:08.533841    6778 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:08.533862    6778 client.go:171] duration metric: took 295.634625ms to LocalClient.Create
	I1204 13:04:10.536151    6778 start.go:128] duration metric: took 2.322662625s to createHost
	I1204 13:04:10.536223    6778 start.go:83] releasing machines lock for "bridge-395000", held for 2.322794334s
	W1204 13:04:10.536290    6778 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:10.548092    6778 out.go:177] * Deleting "bridge-395000" in qemu2 ...
	W1204 13:04:10.574450    6778 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:10.574546    6778 start.go:729] Will try again in 5 seconds ...
	I1204 13:04:15.576791    6778 start.go:360] acquireMachinesLock for bridge-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:15.577119    6778 start.go:364] duration metric: took 279µs to acquireMachinesLock for "bridge-395000"
	I1204 13:04:15.577160    6778 start.go:93] Provisioning new machine with config: &{Name:bridge-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:15.577355    6778 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:15.586865    6778 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:04:15.623711    6778 start.go:159] libmachine.API.Create for "bridge-395000" (driver="qemu2")
	I1204 13:04:15.623771    6778 client.go:168] LocalClient.Create starting
	I1204 13:04:15.623911    6778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:15.623985    6778 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:15.624002    6778 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:15.624063    6778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:15.624114    6778 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:15.624128    6778 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:15.624889    6778 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:15.793045    6778 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:15.918312    6778 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:15.918323    6778 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:15.918555    6778 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2
	I1204 13:04:15.928608    6778 main.go:141] libmachine: STDOUT: 
	I1204 13:04:15.928622    6778 main.go:141] libmachine: STDERR: 
	I1204 13:04:15.928669    6778 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2 +20000M
	I1204 13:04:15.937128    6778 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:15.937147    6778 main.go:141] libmachine: STDERR: 
	I1204 13:04:15.937161    6778 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2
	I1204 13:04:15.937164    6778 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:15.937176    6778 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:15.937216    6778 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:89:f4:84:2d:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/bridge-395000/disk.qcow2
	I1204 13:04:15.939148    6778 main.go:141] libmachine: STDOUT: 
	I1204 13:04:15.939161    6778 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:15.939174    6778 client.go:171] duration metric: took 315.394ms to LocalClient.Create
	I1204 13:04:17.941298    6778 start.go:128] duration metric: took 2.363896792s to createHost
	I1204 13:04:17.941336    6778 start.go:83] releasing machines lock for "bridge-395000", held for 2.364171875s
	W1204 13:04:17.941530    6778 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:17.949949    6778 out.go:201] 
	W1204 13:04:17.957963    6778 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:04:17.957982    6778 out.go:270] * 
	* 
	W1204 13:04:17.958802    6778 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:04:17.970892    6778 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.89s)
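The failure mode above is infrastructural rather than CNI-specific: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet refused both connection attempts, so the VM was never created and the start exited with status 80 after roughly ten seconds. A minimal triage sketch, assuming the default socket_vmnet install paths shown in the log (the launchd job name is an assumption, not taken from this report):

	# Does the unix socket exist, and is the daemon loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet    # job label assumed; adjust to your install
	# If the daemon is down, start it by hand (documented socket_vmnet invocation):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Probe connectivity the same way minikube does, via the client wrapper:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo "socket reachable"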

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-395000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.856908208s)

-- stdout --
	* [kubenet-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-395000" primary control-plane node in "kubenet-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:04:20.329038    6892 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:04:20.329197    6892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:20.329200    6892 out.go:358] Setting ErrFile to fd 2...
	I1204 13:04:20.329203    6892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:20.329323    6892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:04:20.330506    6892 out.go:352] Setting JSON to false
	I1204 13:04:20.348323    6892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5631,"bootTime":1733340629,"procs":580,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:04:20.348401    6892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:04:20.354940    6892 out.go:177] * [kubenet-395000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:04:20.360955    6892 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:04:20.360986    6892 notify.go:220] Checking for updates...
	I1204 13:04:20.368862    6892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:04:20.371939    6892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:04:20.374937    6892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:04:20.377809    6892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:04:20.384769    6892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:04:20.388234    6892 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:04:20.388307    6892 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:04:20.388370    6892 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:04:20.392871    6892 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:04:20.397901    6892 start.go:297] selected driver: qemu2
	I1204 13:04:20.397908    6892 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:04:20.397916    6892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:04:20.400239    6892 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:04:20.402907    6892 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:04:20.406826    6892 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:04:20.406849    6892 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1204 13:04:20.406885    6892 start.go:340] cluster config:
	{Name:kubenet-395000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:04:20.411408    6892 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:20.420040    6892 out.go:177] * Starting "kubenet-395000" primary control-plane node in "kubenet-395000" cluster
	I1204 13:04:20.423969    6892 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:04:20.423986    6892 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:04:20.423997    6892 cache.go:56] Caching tarball of preloaded images
	I1204 13:04:20.424071    6892 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:04:20.424076    6892 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:04:20.424136    6892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/kubenet-395000/config.json ...
	I1204 13:04:20.424146    6892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/kubenet-395000/config.json: {Name:mka368f2f089e7f11a9c26176699c7e1469932e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:04:20.424386    6892 start.go:360] acquireMachinesLock for kubenet-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:20.424430    6892 start.go:364] duration metric: took 38.958µs to acquireMachinesLock for "kubenet-395000"
	I1204 13:04:20.424442    6892 start.go:93] Provisioning new machine with config: &{Name:kubenet-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:20.424467    6892 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:20.432883    6892 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:04:20.449165    6892 start.go:159] libmachine.API.Create for "kubenet-395000" (driver="qemu2")
	I1204 13:04:20.449202    6892 client.go:168] LocalClient.Create starting
	I1204 13:04:20.449273    6892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:20.449311    6892 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:20.449326    6892 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:20.449383    6892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:20.449415    6892 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:20.449421    6892 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:20.449809    6892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:20.609382    6892 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:20.676711    6892 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:20.676717    6892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:20.676945    6892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2
	I1204 13:04:20.686976    6892 main.go:141] libmachine: STDOUT: 
	I1204 13:04:20.686998    6892 main.go:141] libmachine: STDERR: 
	I1204 13:04:20.687056    6892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2 +20000M
	I1204 13:04:20.695628    6892 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:20.695642    6892 main.go:141] libmachine: STDERR: 
	I1204 13:04:20.695664    6892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2
	I1204 13:04:20.695673    6892 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:20.695685    6892 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:20.695716    6892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:54:51:d8:62:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2
	I1204 13:04:20.697652    6892 main.go:141] libmachine: STDOUT: 
	I1204 13:04:20.697665    6892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:20.697689    6892 client.go:171] duration metric: took 248.477584ms to LocalClient.Create
	I1204 13:04:22.699862    6892 start.go:128] duration metric: took 2.27535225s to createHost
	I1204 13:04:22.699909    6892 start.go:83] releasing machines lock for "kubenet-395000", held for 2.2754445s
	W1204 13:04:22.699946    6892 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:22.711217    6892 out.go:177] * Deleting "kubenet-395000" in qemu2 ...
	W1204 13:04:22.733988    6892 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:22.734003    6892 start.go:729] Will try again in 5 seconds ...
	I1204 13:04:27.736212    6892 start.go:360] acquireMachinesLock for kubenet-395000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:27.736399    6892 start.go:364] duration metric: took 126.292µs to acquireMachinesLock for "kubenet-395000"
	I1204 13:04:27.736432    6892 start.go:93] Provisioning new machine with config: &{Name:kubenet-395000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-395000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:27.736485    6892 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:27.740229    6892 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 13:04:27.759719    6892 start.go:159] libmachine.API.Create for "kubenet-395000" (driver="qemu2")
	I1204 13:04:27.759751    6892 client.go:168] LocalClient.Create starting
	I1204 13:04:27.759842    6892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:27.759889    6892 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:27.759899    6892 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:27.759940    6892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:27.759969    6892 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:27.759977    6892 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:27.760326    6892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:27.991224    6892 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:28.087609    6892 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:28.087622    6892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:28.090472    6892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2
	I1204 13:04:28.104190    6892 main.go:141] libmachine: STDOUT: 
	I1204 13:04:28.104221    6892 main.go:141] libmachine: STDERR: 
	I1204 13:04:28.104314    6892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2 +20000M
	I1204 13:04:28.114866    6892 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:28.114887    6892 main.go:141] libmachine: STDERR: 
	I1204 13:04:28.114904    6892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2
	I1204 13:04:28.114911    6892 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:28.114922    6892 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:28.114954    6892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:d0:a9:15:b7:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/kubenet-395000/disk.qcow2
	I1204 13:04:28.117168    6892 main.go:141] libmachine: STDOUT: 
	I1204 13:04:28.117184    6892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:28.117198    6892 client.go:171] duration metric: took 357.4395ms to LocalClient.Create
	I1204 13:04:30.119441    6892 start.go:128] duration metric: took 2.382898208s to createHost
	I1204 13:04:30.119546    6892 start.go:83] releasing machines lock for "kubenet-395000", held for 2.3831075s
	W1204 13:04:30.119903    6892 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:30.129575    6892 out.go:201] 
	W1204 13:04:30.132603    6892 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:04:30.132626    6892 out.go:270] * 
	* 
	W1204 13:04:30.135451    6892 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:04:30.146586    6892 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
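The kubenet failure is byte-for-byte the same as the bridge failure above: the VM never boots because /var/run/socket_vmnet refuses connections, so nothing plugin-specific is exercised. Since every start in this group dies the same way within about ten seconds, a pre-flight probe before the group would surface one clear infrastructure error instead of a cascade; a sketch only, reusing the client wrapper's "socket_vmnet_client SOCKET COMMAND..." form visible in the log:

	# Abort early if the socket_vmnet daemon is unreachable.
	if ! /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true >/dev/null 2>&1; then
	    echo "socket_vmnet at /var/run/socket_vmnet is not reachable; fix the daemon before running qemu2 tests" >&2
	    exit 1
	fi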

TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-570000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-570000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.865138167s)

-- stdout --
	* [old-k8s-version-570000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-570000" primary control-plane node in "old-k8s-version-570000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-570000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:04:32.510764    7007 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:04:32.510917    7007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:32.510920    7007 out.go:358] Setting ErrFile to fd 2...
	I1204 13:04:32.510923    7007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:32.511040    7007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:04:32.512195    7007 out.go:352] Setting JSON to false
	I1204 13:04:32.530352    7007 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5643,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:04:32.530433    7007 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:04:32.535648    7007 out.go:177] * [old-k8s-version-570000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:04:32.544622    7007 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:04:32.544684    7007 notify.go:220] Checking for updates...
	I1204 13:04:32.552676    7007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:04:32.556565    7007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:04:32.559554    7007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:04:32.562618    7007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:04:32.565623    7007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:04:32.568914    7007 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:04:32.568992    7007 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:04:32.569042    7007 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:04:32.572591    7007 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:04:32.579615    7007 start.go:297] selected driver: qemu2
	I1204 13:04:32.579622    7007 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:04:32.579636    7007 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:04:32.581976    7007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:04:32.585575    7007 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:04:32.588711    7007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:04:32.588731    7007 cni.go:84] Creating CNI manager for ""
	I1204 13:04:32.588760    7007 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 13:04:32.588785    7007 start.go:340] cluster config:
	{Name:old-k8s-version-570000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:04:32.592960    7007 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:32.601640    7007 out.go:177] * Starting "old-k8s-version-570000" primary control-plane node in "old-k8s-version-570000" cluster
	I1204 13:04:32.605622    7007 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 13:04:32.605636    7007 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 13:04:32.605643    7007 cache.go:56] Caching tarball of preloaded images
	I1204 13:04:32.605718    7007 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:04:32.605723    7007 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 13:04:32.605772    7007 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/old-k8s-version-570000/config.json ...
	I1204 13:04:32.605782    7007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/old-k8s-version-570000/config.json: {Name:mkc936523f0ca6f6fc84f042fa826fa135d83b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:04:32.606264    7007 start.go:360] acquireMachinesLock for old-k8s-version-570000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:32.606311    7007 start.go:364] duration metric: took 40.208µs to acquireMachinesLock for "old-k8s-version-570000"
	I1204 13:04:32.606327    7007 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:32.606355    7007 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:32.613574    7007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:04:32.628071    7007 start.go:159] libmachine.API.Create for "old-k8s-version-570000" (driver="qemu2")
	I1204 13:04:32.628100    7007 client.go:168] LocalClient.Create starting
	I1204 13:04:32.628173    7007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:32.628208    7007 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:32.628222    7007 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:32.628267    7007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:32.628299    7007 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:32.628305    7007 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:32.628831    7007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:32.785634    7007 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:32.872670    7007 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:32.872682    7007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:32.872915    7007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:32.883200    7007 main.go:141] libmachine: STDOUT: 
	I1204 13:04:32.883214    7007 main.go:141] libmachine: STDERR: 
	I1204 13:04:32.883277    7007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2 +20000M
	I1204 13:04:32.891826    7007 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:32.891840    7007 main.go:141] libmachine: STDERR: 
	I1204 13:04:32.891856    7007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:32.891862    7007 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:32.891879    7007 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:32.891924    7007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:c0:79:29:54:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:32.893796    7007 main.go:141] libmachine: STDOUT: 
	I1204 13:04:32.893808    7007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:32.893828    7007 client.go:171] duration metric: took 265.720458ms to LocalClient.Create
	I1204 13:04:34.896047    7007 start.go:128] duration metric: took 2.289637041s to createHost
	I1204 13:04:34.896123    7007 start.go:83] releasing machines lock for "old-k8s-version-570000", held for 2.28977475s
	W1204 13:04:34.896179    7007 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:34.907135    7007 out.go:177] * Deleting "old-k8s-version-570000" in qemu2 ...
	W1204 13:04:34.936002    7007 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:34.936042    7007 start.go:729] Will try again in 5 seconds ...
	I1204 13:04:39.938315    7007 start.go:360] acquireMachinesLock for old-k8s-version-570000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:39.939064    7007 start.go:364] duration metric: took 607.459µs to acquireMachinesLock for "old-k8s-version-570000"
	I1204 13:04:39.939170    7007 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:39.939477    7007 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:39.950029    7007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:04:39.993468    7007 start.go:159] libmachine.API.Create for "old-k8s-version-570000" (driver="qemu2")
	I1204 13:04:39.993511    7007 client.go:168] LocalClient.Create starting
	I1204 13:04:39.993669    7007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:39.993735    7007 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:39.993755    7007 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:39.993825    7007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:39.993877    7007 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:39.993888    7007 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:39.994591    7007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:40.163838    7007 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:40.275166    7007 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:40.275176    7007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:40.275412    7007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:40.286329    7007 main.go:141] libmachine: STDOUT: 
	I1204 13:04:40.286348    7007 main.go:141] libmachine: STDERR: 
	I1204 13:04:40.286411    7007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2 +20000M
	I1204 13:04:40.295747    7007 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:40.295771    7007 main.go:141] libmachine: STDERR: 
	I1204 13:04:40.295784    7007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:40.295787    7007 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:40.295800    7007 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:40.295828    7007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:74:ea:7f:02:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:40.297820    7007 main.go:141] libmachine: STDOUT: 
	I1204 13:04:40.297836    7007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:40.297849    7007 client.go:171] duration metric: took 304.330375ms to LocalClient.Create
	I1204 13:04:42.299982    7007 start.go:128] duration metric: took 2.360456875s to createHost
	I1204 13:04:42.300039    7007 start.go:83] releasing machines lock for "old-k8s-version-570000", held for 2.360897875s
	W1204 13:04:42.300236    7007 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-570000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:42.314568    7007 out.go:201] 
	W1204 13:04:42.318635    7007 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:04:42.318646    7007 out.go:270] * 
	* 
	W1204 13:04:42.319580    7007 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:04:42.334537    7007 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-570000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (49.107458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.92s)
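
Every start attempt in this test dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon's Unix socket at /var/run/socket_vmnet, so no VM is ever created and everything downstream fails. A minimal troubleshooting sketch for the agent host, assuming socket_vmnet was installed via Homebrew (as the /opt/socket_vmnet paths in the logs suggest); exact paths and service management may differ per install:

	ls -l /var/run/socket_vmnet                # the socket every qemu-system-aarch64 launch above failed to connect to
	sudo brew services restart socket_vmnet   # restart the daemon; it must run as root to create vmnet interfaces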

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-570000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-570000 create -f testdata/busybox.yaml: exit status 1 (28.407125ms)

** stderr ** 
	error: context "old-k8s-version-570000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-570000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (34.184333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (34.510625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
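
The kubectl failures here are downstream of the failed FirstStart: because the VM never came up, minikube never wrote an old-k8s-version-570000 entry into the kubeconfig, so every kubectl --context invocation aborts before reaching a cluster. Two standard kubectl commands that would confirm this on the host (a troubleshooting sketch, not part of the test run; the failing profile should simply be absent from the output):

	kubectl config get-contexts      # lists every context in the active kubeconfig
	kubectl config current-context   # errors if no context is selected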

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-570000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-570000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-570000 describe deploy/metrics-server -n kube-system: exit status 1 (28.185291ms)

** stderr ** 
	error: context "old-k8s-version-570000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-570000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (34.224958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-570000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-570000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.2460155s)

-- stdout --
	* [old-k8s-version-570000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-570000" primary control-plane node in "old-k8s-version-570000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-570000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-570000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:04:44.572914    7054 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:04:44.576629    7054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:44.576633    7054 out.go:358] Setting ErrFile to fd 2...
	I1204 13:04:44.576636    7054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:44.576785    7054 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:04:44.579893    7054 out.go:352] Setting JSON to false
	I1204 13:04:44.599511    7054 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5655,"bootTime":1733340629,"procs":582,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:04:44.599581    7054 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:04:44.607545    7054 out.go:177] * [old-k8s-version-570000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:04:44.618660    7054 notify.go:220] Checking for updates...
	I1204 13:04:44.622525    7054 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:04:44.630550    7054 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:04:44.633603    7054 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:04:44.636547    7054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:04:44.644546    7054 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:04:44.659548    7054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:04:44.663889    7054 config.go:182] Loaded profile config "old-k8s-version-570000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1204 13:04:44.669491    7054 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 13:04:44.673520    7054 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:04:44.676526    7054 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 13:04:44.683566    7054 start.go:297] selected driver: qemu2
	I1204 13:04:44.683574    7054 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:04:44.683644    7054 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:04:44.686276    7054 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:04:44.686300    7054 cni.go:84] Creating CNI manager for ""
	I1204 13:04:44.686322    7054 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 13:04:44.686349    7054 start.go:340] cluster config:
	{Name:old-k8s-version-570000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-570000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:04:44.690641    7054 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:44.704556    7054 out.go:177] * Starting "old-k8s-version-570000" primary control-plane node in "old-k8s-version-570000" cluster
	I1204 13:04:44.712547    7054 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 13:04:44.712574    7054 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 13:04:44.712582    7054 cache.go:56] Caching tarball of preloaded images
	I1204 13:04:44.712674    7054 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:04:44.712680    7054 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 13:04:44.712736    7054 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/old-k8s-version-570000/config.json ...
	I1204 13:04:44.713135    7054 start.go:360] acquireMachinesLock for old-k8s-version-570000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:44.713165    7054 start.go:364] duration metric: took 22.958µs to acquireMachinesLock for "old-k8s-version-570000"
	I1204 13:04:44.713178    7054 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:04:44.713181    7054 fix.go:54] fixHost starting: 
	I1204 13:04:44.713296    7054 fix.go:112] recreateIfNeeded on old-k8s-version-570000: state=Stopped err=<nil>
	W1204 13:04:44.713307    7054 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:04:44.717535    7054 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-570000" ...
	I1204 13:04:44.724536    7054 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:44.724596    7054 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:74:ea:7f:02:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:44.726790    7054 main.go:141] libmachine: STDOUT: 
	I1204 13:04:44.726807    7054 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:44.726839    7054 fix.go:56] duration metric: took 13.654625ms for fixHost
	I1204 13:04:44.726845    7054 start.go:83] releasing machines lock for "old-k8s-version-570000", held for 13.675041ms
	W1204 13:04:44.726851    7054 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:04:44.726896    7054 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:44.726900    7054 start.go:729] Will try again in 5 seconds ...
	I1204 13:04:49.729144    7054 start.go:360] acquireMachinesLock for old-k8s-version-570000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:49.729583    7054 start.go:364] duration metric: took 358.625µs to acquireMachinesLock for "old-k8s-version-570000"
	I1204 13:04:49.729716    7054 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:04:49.729731    7054 fix.go:54] fixHost starting: 
	I1204 13:04:49.730367    7054 fix.go:112] recreateIfNeeded on old-k8s-version-570000: state=Stopped err=<nil>
	W1204 13:04:49.730390    7054 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:04:49.738696    7054 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-570000" ...
	I1204 13:04:49.742811    7054 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:49.742945    7054 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:74:ea:7f:02:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/old-k8s-version-570000/disk.qcow2
	I1204 13:04:49.751856    7054 main.go:141] libmachine: STDOUT: 
	I1204 13:04:49.751909    7054 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:49.751987    7054 fix.go:56] duration metric: took 22.255958ms for fixHost
	I1204 13:04:49.752044    7054 start.go:83] releasing machines lock for "old-k8s-version-570000", held for 22.398292ms
	W1204 13:04:49.752240    7054 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-570000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-570000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:49.758789    7054 out.go:201] 
	W1204 13:04:49.762874    7054 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:04:49.762916    7054 out.go:270] * 
	* 
	W1204 13:04:49.764909    7054 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:04:49.773293    7054 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-570000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (59.52775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.31s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-570000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (35.570834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-570000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-570000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-570000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.786167ms)

** stderr ** 
	error: context "old-k8s-version-570000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-570000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (33.663458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-570000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (34.462167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-570000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-570000 --alsologtostderr -v=1: exit status 83 (46.867ms)

-- stdout --
	* The control-plane node old-k8s-version-570000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-570000"

-- /stdout --
** stderr ** 
	I1204 13:04:50.056695    7077 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:04:50.057090    7077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:50.057094    7077 out.go:358] Setting ErrFile to fd 2...
	I1204 13:04:50.057096    7077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:50.057224    7077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:04:50.057426    7077 out.go:352] Setting JSON to false
	I1204 13:04:50.057437    7077 mustload.go:65] Loading cluster: old-k8s-version-570000
	I1204 13:04:50.057649    7077 config.go:182] Loaded profile config "old-k8s-version-570000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1204 13:04:50.061887    7077 out.go:177] * The control-plane node old-k8s-version-570000 host is not running: state=Stopped
	I1204 13:04:50.066908    7077 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-570000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-570000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (33.638ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (33.409833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-570000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-676000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-676000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.946841666s)

-- stdout --
	* [no-preload-676000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-676000" primary control-plane node in "no-preload-676000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:04:50.406542    7096 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:04:50.406725    7096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:50.406728    7096 out.go:358] Setting ErrFile to fd 2...
	I1204 13:04:50.406731    7096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:04:50.406876    7096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:04:50.408094    7096 out.go:352] Setting JSON to false
	I1204 13:04:50.426766    7096 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5661,"bootTime":1733340629,"procs":584,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:04:50.426862    7096 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:04:50.430832    7096 out.go:177] * [no-preload-676000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:04:50.438719    7096 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:04:50.438812    7096 notify.go:220] Checking for updates...
	I1204 13:04:50.445744    7096 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:04:50.448769    7096 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:04:50.452606    7096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:04:50.455757    7096 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:04:50.458784    7096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:04:50.462077    7096 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:04:50.462138    7096 config.go:182] Loaded profile config "stopped-upgrade-827000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 13:04:50.462180    7096 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:04:50.465691    7096 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:04:50.472762    7096 start.go:297] selected driver: qemu2
	I1204 13:04:50.472768    7096 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:04:50.472773    7096 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:04:50.475191    7096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:04:50.477706    7096 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:04:50.481784    7096 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:04:50.481803    7096 cni.go:84] Creating CNI manager for ""
	I1204 13:04:50.481822    7096 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:04:50.481827    7096 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 13:04:50.481871    7096 start.go:340] cluster config:
	{Name:no-preload-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:04:50.486164    7096 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.494707    7096 out.go:177] * Starting "no-preload-676000" primary control-plane node in "no-preload-676000" cluster
	I1204 13:04:50.498824    7096 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:04:50.498918    7096 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/no-preload-676000/config.json ...
	I1204 13:04:50.498943    7096 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/no-preload-676000/config.json: {Name:mk860c7f674ee96248a821773cb8e17d8e6e4041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:04:50.498947    7096 cache.go:107] acquiring lock: {Name:mk34f87c1a801b7b524d07135d4ba91d3d9ee3f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.498958    7096 cache.go:107] acquiring lock: {Name:mk889a1f0064799ac8aa0d2b04307d425841de4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.498988    7096 cache.go:107] acquiring lock: {Name:mkca2bde39ff973e51b0f079802ed95502453a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.499034    7096 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1204 13:04:50.499041    7096 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.833µs
	I1204 13:04:50.499047    7096 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1204 13:04:50.498941    7096 cache.go:107] acquiring lock: {Name:mk862020274fc0afae38b6d5d38ee1c64d930c0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.499085    7096 cache.go:107] acquiring lock: {Name:mk8719664df09edc21aac662ff40226da34e36bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.499079    7096 cache.go:107] acquiring lock: {Name:mkb5b48ed25a808d6e586abf67bfeedd336e7bb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.499113    7096 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 13:04:50.499149    7096 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 13:04:50.499102    7096 cache.go:107] acquiring lock: {Name:mk6888752d904026694bda75a51ccff1e7a46bd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.499181    7096 cache.go:107] acquiring lock: {Name:mk47a1b03926e1d60820606643ffd8bf468a00e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:04:50.499197    7096 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 13:04:50.499364    7096 start.go:360] acquireMachinesLock for no-preload-676000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:50.499416    7096 start.go:364] duration metric: took 46.583µs to acquireMachinesLock for "no-preload-676000"
	I1204 13:04:50.499420    7096 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 13:04:50.499428    7096 start.go:93] Provisioning new machine with config: &{Name:no-preload-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:50.499464    7096 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:50.499488    7096 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 13:04:50.499493    7096 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 13:04:50.499524    7096 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 13:04:50.507695    7096 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:04:50.513161    7096 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 13:04:50.513421    7096 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 13:04:50.514605    7096 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 13:04:50.514636    7096 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 13:04:50.514635    7096 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 13:04:50.514630    7096 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 13:04:50.514691    7096 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 13:04:50.524544    7096 start.go:159] libmachine.API.Create for "no-preload-676000" (driver="qemu2")
	I1204 13:04:50.524565    7096 client.go:168] LocalClient.Create starting
	I1204 13:04:50.524654    7096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:50.524691    7096 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:50.524710    7096 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:50.524749    7096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:50.524777    7096 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:50.524784    7096 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:50.525164    7096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:50.690267    7096 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:50.836421    7096 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:50.836470    7096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:50.836827    7096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:04:50.846955    7096 main.go:141] libmachine: STDOUT: 
	I1204 13:04:50.846974    7096 main.go:141] libmachine: STDERR: 
	I1204 13:04:50.847040    7096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2 +20000M
	I1204 13:04:50.855714    7096 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:50.855729    7096 main.go:141] libmachine: STDERR: 
	I1204 13:04:50.855745    7096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:04:50.855750    7096 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:50.855764    7096 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:50.855798    7096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:0e:ff:60:2c:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:04:50.857711    7096 main.go:141] libmachine: STDOUT: 
	I1204 13:04:50.857725    7096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:50.857742    7096 client.go:171] duration metric: took 333.167167ms to LocalClient.Create
	I1204 13:04:50.982013    7096 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 13:04:50.989294    7096 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 13:04:51.032238    7096 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 13:04:51.036104    7096 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1204 13:04:51.162795    7096 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 13:04:51.172582    7096 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 13:04:51.212661    7096 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1204 13:04:51.212672    7096 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 713.651417ms
	I1204 13:04:51.212678    7096 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1204 13:04:51.271329    7096 cache.go:162] opening:  /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1204 13:04:52.857951    7096 start.go:128] duration metric: took 2.358440917s to createHost
	I1204 13:04:52.857982    7096 start.go:83] releasing machines lock for "no-preload-676000", held for 2.358532459s
	W1204 13:04:52.858016    7096 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:52.871891    7096 out.go:177] * Deleting "no-preload-676000" in qemu2 ...
	W1204 13:04:52.890669    7096 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:04:52.890682    7096 start.go:729] Will try again in 5 seconds ...
	I1204 13:04:54.493132    7096 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1204 13:04:54.493147    7096 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.994034833s
	I1204 13:04:54.493156    7096 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1204 13:04:54.798907    7096 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1204 13:04:54.798936    7096 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 4.299926541s
	I1204 13:04:54.798952    7096 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1204 13:04:54.964656    7096 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1204 13:04:54.964689    7096 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 4.465675209s
	I1204 13:04:54.964701    7096 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1204 13:04:54.992402    7096 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1204 13:04:54.992414    7096 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 4.493321833s
	I1204 13:04:54.992426    7096 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1204 13:04:56.053594    7096 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1204 13:04:56.053620    7096 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 5.554623458s
	I1204 13:04:56.053634    7096 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1204 13:04:57.890978    7096 start.go:360] acquireMachinesLock for no-preload-676000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:04:57.891613    7096 start.go:364] duration metric: took 548.916µs to acquireMachinesLock for "no-preload-676000"
	I1204 13:04:57.891753    7096 start.go:93] Provisioning new machine with config: &{Name:no-preload-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:04:57.892006    7096 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:04:57.905588    7096 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:04:57.953500    7096 start.go:159] libmachine.API.Create for "no-preload-676000" (driver="qemu2")
	I1204 13:04:57.953590    7096 client.go:168] LocalClient.Create starting
	I1204 13:04:57.953808    7096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:04:57.953909    7096 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:57.953931    7096 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:57.954008    7096 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:04:57.954067    7096 main.go:141] libmachine: Decoding PEM data...
	I1204 13:04:57.954081    7096 main.go:141] libmachine: Parsing certificate...
	I1204 13:04:57.954664    7096 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:04:58.123699    7096 main.go:141] libmachine: Creating SSH key...
	I1204 13:04:58.252215    7096 main.go:141] libmachine: Creating Disk image...
	I1204 13:04:58.252225    7096 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:04:58.252448    7096 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:04:58.262848    7096 main.go:141] libmachine: STDOUT: 
	I1204 13:04:58.262871    7096 main.go:141] libmachine: STDERR: 
	I1204 13:04:58.262933    7096 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2 +20000M
	I1204 13:04:58.271784    7096 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:04:58.271846    7096 main.go:141] libmachine: STDERR: 
	I1204 13:04:58.271859    7096 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:04:58.271866    7096 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:04:58.271876    7096 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:04:58.271915    7096 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d6:da:39:9d:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:04:58.273921    7096 main.go:141] libmachine: STDOUT: 
	I1204 13:04:58.273935    7096 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:04:58.273947    7096 client.go:171] duration metric: took 320.334583ms to LocalClient.Create
	I1204 13:04:59.324905    7096 cache.go:157] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1204 13:04:59.324963    7096 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.825724334s
	I1204 13:04:59.324998    7096 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1204 13:04:59.325059    7096 cache.go:87] Successfully saved all images to host disk.
	I1204 13:05:00.276199    7096 start.go:128] duration metric: took 2.384101958s to createHost
	I1204 13:05:00.276296    7096 start.go:83] releasing machines lock for "no-preload-676000", held for 2.384625209s
	W1204 13:05:00.276570    7096 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:00.287237    7096 out.go:201] 
	W1204 13:05:00.296277    7096 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:00.296313    7096 out.go:270] * 
	* 
	W1204 13:05:00.299130    7096 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:00.307216    7096 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-676000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
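Every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which strongly suggests the socket_vmnet daemon on the CI host is not running. A minimal diagnostic sketch for the host, assuming socket_vmnet was installed via Homebrew as the paths above imply (the launchd service name is an assumption based on that install method):

    # Probe the socket directly; "Connection refused" here reproduces the failure
    nc -U /var/run/socket_vmnet < /dev/null
    # Check whether the daemon is loaded (assumes a Homebrew-managed launchd service)
    sudo launchctl list | grep -i socket_vmnet
    # Restart it; socket_vmnet must run as root to use the vmnet framework
    sudo brew services restart socket_vmnet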
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (67.766208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.02s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-676000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-676000 create -f testdata/busybox.yaml: exit status 1 (30.738083ms)

** stderr ** 
	error: context "no-preload-676000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-676000 create -f testdata/busybox.yaml failed: exit status 1
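The error context "no-preload-676000" does not exist is a downstream symptom rather than a separate bug: FirstStart exited before provisioning a VM, so minikube never wrote a context into the kubeconfig this run points at. A quick confirmation sketch (kubeconfig path taken from the run above):

    # The no-preload-676000 entry will be absent from the list
    kubectl --kubeconfig /Users/jenkins/minikube-integration/19985-1334/kubeconfig config get-contexts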
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (34.411625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (34.66425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-676000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-676000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-676000 describe deploy/metrics-server -n kube-system: exit status 1 (28.918166ms)

** stderr ** 
	error: context "no-preload-676000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-676000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
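The expected string fake.domain/registry.k8s.io/echoserver:1.4 is the concatenation of the two overrides passed to addons enable above: --registries=MetricsServer=fake.domain supplies the registry prefix and --images=MetricsServer=registry.k8s.io/echoserver:1.4 supplies the image. The deployment info is empty for the same reason as the previous test: there is no cluster to describe. A hedged way to inspect the overrides minikube recorded, assuming the addons images subcommand is available in this minikube build:

    # Lists the image and registry overrides configured for the addon
    out/minikube-darwin-arm64 -p no-preload-676000 addons images metrics-server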
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (33.654709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-467000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-467000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.95718475s)

-- stdout --
	* [embed-certs-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-467000" primary control-plane node in "embed-certs-467000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-467000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:05:00.792858    7157 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:00.793010    7157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:00.793014    7157 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:00.793016    7157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:00.793172    7157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:00.794311    7157 out.go:352] Setting JSON to false
	I1204 13:05:00.812453    7157 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5671,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:05:00.812524    7157 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:05:00.817336    7157 out.go:177] * [embed-certs-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:05:00.825347    7157 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:05:00.825419    7157 notify.go:220] Checking for updates...
	I1204 13:05:00.832233    7157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:05:00.835314    7157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:05:00.839309    7157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:05:00.842301    7157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:05:00.845347    7157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:05:00.849078    7157 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:00.849184    7157 config.go:182] Loaded profile config "no-preload-676000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:00.849248    7157 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:05:00.852265    7157 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:05:00.859325    7157 start.go:297] selected driver: qemu2
	I1204 13:05:00.859332    7157 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:05:00.859338    7157 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:05:00.861926    7157 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:05:00.865310    7157 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:05:00.868418    7157 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:05:00.868444    7157 cni.go:84] Creating CNI manager for ""
	I1204 13:05:00.868469    7157 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:05:00.868476    7157 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 13:05:00.868512    7157 start.go:340] cluster config:
	{Name:embed-certs-467000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:00.873206    7157 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:00.881342    7157 out.go:177] * Starting "embed-certs-467000" primary control-plane node in "embed-certs-467000" cluster
	I1204 13:05:00.885318    7157 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:05:00.885335    7157 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:05:00.885344    7157 cache.go:56] Caching tarball of preloaded images
	I1204 13:05:00.885437    7157 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:05:00.885443    7157 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:05:00.885508    7157 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/embed-certs-467000/config.json ...
	I1204 13:05:00.885519    7157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/embed-certs-467000/config.json: {Name:mkf17c8c9bf5166dc2be10a6f3ec8415c9fe7f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:05:00.885987    7157 start.go:360] acquireMachinesLock for embed-certs-467000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:00.886046    7157 start.go:364] duration metric: took 46µs to acquireMachinesLock for "embed-certs-467000"
	I1204 13:05:00.886060    7157 start.go:93] Provisioning new machine with config: &{Name:embed-certs-467000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:05:00.886092    7157 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:05:00.895329    7157 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:05:00.913393    7157 start.go:159] libmachine.API.Create for "embed-certs-467000" (driver="qemu2")
	I1204 13:05:00.913421    7157 client.go:168] LocalClient.Create starting
	I1204 13:05:00.913501    7157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:05:00.913544    7157 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:00.913562    7157 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:00.913605    7157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:05:00.913636    7157 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:00.913643    7157 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:00.914115    7157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:05:01.074564    7157 main.go:141] libmachine: Creating SSH key...
	I1204 13:05:01.192148    7157 main.go:141] libmachine: Creating Disk image...
	I1204 13:05:01.192156    7157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:05:01.192377    7157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:01.201858    7157 main.go:141] libmachine: STDOUT: 
	I1204 13:05:01.201879    7157 main.go:141] libmachine: STDERR: 
	I1204 13:05:01.201949    7157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2 +20000M
	I1204 13:05:01.210341    7157 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:05:01.210357    7157 main.go:141] libmachine: STDERR: 
	I1204 13:05:01.210375    7157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:01.210380    7157 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:05:01.210394    7157 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:01.210424    7157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d9:0e:83:8d:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:01.212251    7157 main.go:141] libmachine: STDOUT: 
	I1204 13:05:01.212266    7157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:01.212286    7157 client.go:171] duration metric: took 298.855416ms to LocalClient.Create
	I1204 13:05:03.214518    7157 start.go:128] duration metric: took 2.32837325s to createHost
	I1204 13:05:03.214591    7157 start.go:83] releasing machines lock for "embed-certs-467000", held for 2.328505333s
	W1204 13:05:03.214680    7157 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:03.221206    7157 out.go:177] * Deleting "embed-certs-467000" in qemu2 ...
	W1204 13:05:03.251427    7157 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:03.251499    7157 start.go:729] Will try again in 5 seconds ...
	I1204 13:05:08.253801    7157 start.go:360] acquireMachinesLock for embed-certs-467000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:08.254332    7157 start.go:364] duration metric: took 429.583µs to acquireMachinesLock for "embed-certs-467000"
	I1204 13:05:08.254453    7157 start.go:93] Provisioning new machine with config: &{Name:embed-certs-467000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:05:08.254832    7157 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:05:08.264433    7157 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:05:08.312729    7157 start.go:159] libmachine.API.Create for "embed-certs-467000" (driver="qemu2")
	I1204 13:05:08.312783    7157 client.go:168] LocalClient.Create starting
	I1204 13:05:08.312916    7157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:05:08.312995    7157 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:08.313009    7157 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:08.313088    7157 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:05:08.313145    7157 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:08.313156    7157 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:08.313842    7157 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:05:08.485667    7157 main.go:141] libmachine: Creating SSH key...
	I1204 13:05:08.629489    7157 main.go:141] libmachine: Creating Disk image...
	I1204 13:05:08.629496    7157 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:05:08.629735    7157 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:08.639995    7157 main.go:141] libmachine: STDOUT: 
	I1204 13:05:08.640017    7157 main.go:141] libmachine: STDERR: 
	I1204 13:05:08.640074    7157 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2 +20000M
	I1204 13:05:08.648502    7157 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:05:08.648528    7157 main.go:141] libmachine: STDERR: 
	I1204 13:05:08.648539    7157 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:08.648543    7157 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:05:08.648550    7157 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:08.648596    7157 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:62:32:8d:e0:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:08.650384    7157 main.go:141] libmachine: STDOUT: 
	I1204 13:05:08.650404    7157 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:08.650418    7157 client.go:171] duration metric: took 337.626125ms to LocalClient.Create
	I1204 13:05:10.652651    7157 start.go:128] duration metric: took 2.397755834s to createHost
	I1204 13:05:10.652716    7157 start.go:83] releasing machines lock for "embed-certs-467000", held for 2.39832775s
	W1204 13:05:10.653107    7157 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-467000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-467000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:10.662688    7157 out.go:201] 
	W1204 13:05:10.680761    7157 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:10.680842    7157 out.go:270] * 
	* 
	W1204 13:05:10.683494    7157 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:10.696621    7157 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-467000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (68.765125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.03s)
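
Every start failure in this group has the same root cause, visible in the STDERR lines above: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal host-side health check, assuming socket_vmnet was installed via Homebrew (the service name and install method are assumptions, not taken from this log):

	# Is anything serving the socket the driver points at?
	ls -l /var/run/socket_vmnet
	# Check and (re)start the root launchd service that Homebrew manages
	sudo launchctl list | grep -i socket_vmnet
	sudo brew services restart socket_vmnet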

TestStartStop/group/no-preload/serial/SecondStart (6.67s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-676000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-676000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (6.609996333s)

-- stdout --
	* [no-preload-676000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-676000" primary control-plane node in "no-preload-676000" cluster
	* Restarting existing qemu2 VM for "no-preload-676000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-676000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:05:04.153067    7189 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:04.153212    7189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:04.153215    7189 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:04.153217    7189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:04.153339    7189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:04.154480    7189 out.go:352] Setting JSON to false
	I1204 13:05:04.173213    7189 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5675,"bootTime":1733340629,"procs":583,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:05:04.173354    7189 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:05:04.178620    7189 out.go:177] * [no-preload-676000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:05:04.185437    7189 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:05:04.185497    7189 notify.go:220] Checking for updates...
	I1204 13:05:04.192426    7189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:05:04.195477    7189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:05:04.199461    7189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:05:04.202460    7189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:05:04.205473    7189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:05:04.208735    7189 config.go:182] Loaded profile config "no-preload-676000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:04.209013    7189 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:05:04.212453    7189 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 13:05:04.219404    7189 start.go:297] selected driver: qemu2
	I1204 13:05:04.219411    7189 start.go:901] validating driver "qemu2" against &{Name:no-preload-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:04.219473    7189 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:05:04.222118    7189 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:05:04.222141    7189 cni.go:84] Creating CNI manager for ""
	I1204 13:05:04.222168    7189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:05:04.222201    7189 start.go:340] cluster config:
	{Name:no-preload-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-676000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:04.226739    7189 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.234447    7189 out.go:177] * Starting "no-preload-676000" primary control-plane node in "no-preload-676000" cluster
	I1204 13:05:04.238424    7189 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:05:04.238492    7189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/no-preload-676000/config.json ...
	I1204 13:05:04.238512    7189 cache.go:107] acquiring lock: {Name:mk34f87c1a801b7b524d07135d4ba91d3d9ee3f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238515    7189 cache.go:107] acquiring lock: {Name:mkca2bde39ff973e51b0f079802ed95502453a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238542    7189 cache.go:107] acquiring lock: {Name:mk8719664df09edc21aac662ff40226da34e36bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238541    7189 cache.go:107] acquiring lock: {Name:mkb5b48ed25a808d6e586abf67bfeedd336e7bb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238564    7189 cache.go:107] acquiring lock: {Name:mk889a1f0064799ac8aa0d2b04307d425841de4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238602    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1204 13:05:04.238565    7189 cache.go:107] acquiring lock: {Name:mk862020274fc0afae38b6d5d38ee1c64d930c0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238619    7189 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.417µs
	I1204 13:05:04.238631    7189 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1204 13:05:04.238626    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1204 13:05:04.238638    7189 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 119.875µs
	I1204 13:05:04.238643    7189 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1204 13:05:04.238671    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1204 13:05:04.238669    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1204 13:05:04.238677    7189 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 134.708µs
	I1204 13:05:04.238681    7189 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1204 13:05:04.238677    7189 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 113.625µs
	I1204 13:05:04.238685    7189 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1204 13:05:04.238682    7189 cache.go:107] acquiring lock: {Name:mk6888752d904026694bda75a51ccff1e7a46bd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238690    7189 cache.go:107] acquiring lock: {Name:mk47a1b03926e1d60820606643ffd8bf468a00e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:04.238800    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1204 13:05:04.238801    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1204 13:05:04.238800    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1204 13:05:04.238806    7189 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 257.666µs
	I1204 13:05:04.238813    7189 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1204 13:05:04.238813    7189 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 310.375µs
	I1204 13:05:04.238814    7189 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 179.875µs
	I1204 13:05:04.238818    7189 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1204 13:05:04.238819    7189 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1204 13:05:04.238800    7189 cache.go:115] /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1204 13:05:04.238826    7189 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 170.833µs
	I1204 13:05:04.238830    7189 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1204 13:05:04.238833    7189 cache.go:87] Successfully saved all images to host disk.
	I1204 13:05:04.238983    7189 start.go:360] acquireMachinesLock for no-preload-676000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:04.239031    7189 start.go:364] duration metric: took 42.416µs to acquireMachinesLock for "no-preload-676000"
	I1204 13:05:04.239041    7189 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:04.239045    7189 fix.go:54] fixHost starting: 
	I1204 13:05:04.239172    7189 fix.go:112] recreateIfNeeded on no-preload-676000: state=Stopped err=<nil>
	W1204 13:05:04.239181    7189 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:04.246445    7189 out.go:177] * Restarting existing qemu2 VM for "no-preload-676000" ...
	I1204 13:05:04.250413    7189 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:04.250456    7189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d6:da:39:9d:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:05:04.252892    7189 main.go:141] libmachine: STDOUT: 
	I1204 13:05:04.252916    7189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:04.252944    7189 fix.go:56] duration metric: took 13.896083ms for fixHost
	I1204 13:05:04.252948    7189 start.go:83] releasing machines lock for "no-preload-676000", held for 13.912583ms
	W1204 13:05:04.252955    7189 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:04.252990    7189 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:04.252995    7189 start.go:729] Will try again in 5 seconds ...
	I1204 13:05:09.255317    7189 start.go:360] acquireMachinesLock for no-preload-676000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:10.652922    7189 start.go:364] duration metric: took 1.397453042s to acquireMachinesLock for "no-preload-676000"
	I1204 13:05:10.653122    7189 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:10.653142    7189 fix.go:54] fixHost starting: 
	I1204 13:05:10.653970    7189 fix.go:112] recreateIfNeeded on no-preload-676000: state=Stopped err=<nil>
	W1204 13:05:10.653997    7189 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:10.676634    7189 out.go:177] * Restarting existing qemu2 VM for "no-preload-676000" ...
	I1204 13:05:10.684599    7189 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:10.684849    7189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d6:da:39:9d:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/no-preload-676000/disk.qcow2
	I1204 13:05:10.695585    7189 main.go:141] libmachine: STDOUT: 
	I1204 13:05:10.695638    7189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:10.695712    7189 fix.go:56] duration metric: took 42.574334ms for fixHost
	I1204 13:05:10.695733    7189 start.go:83] releasing machines lock for "no-preload-676000", held for 42.762541ms
	W1204 13:05:10.695928    7189 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-676000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-676000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:10.707647    7189 out.go:201] 
	W1204 13:05:10.711704    7189 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:10.711769    7189 out.go:270] * 
	* 
	W1204 13:05:10.714269    7189 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:10.726656    7189 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-676000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (61.159833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.67s)
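
The remediation the log itself suggests can be run directly; a sketch using the same binary and profile, with the start flags copied from the test invocation above (minus test-only options):

	# Drop the broken profile, then re-create it once socket_vmnet is reachable
	out/minikube-darwin-arm64 delete -p no-preload-676000
	out/minikube-darwin-arm64 start -p no-preload-676000 --memory=2200 --preload=false --driver=qemu2 --kubernetes-version=v1.31.2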

TestStartStop/group/embed-certs/serial/DeployApp (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-467000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-467000 create -f testdata/busybox.yaml: exit status 1 (31.482084ms)

** stderr ** 
	error: context "embed-certs-467000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-467000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (36.424625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (37.924333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.11s)
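
The kubectl error is a downstream symptom: FirstStart never provisioned the VM, so minikube never wrote an embed-certs-467000 entry into the kubeconfig, and every "kubectl --context" call fails before reaching any API server. This is easy to confirm with standard kubectl commands against the kubeconfig path shown in the start output:

	# embed-certs-467000 will be absent from the context list
	KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig kubectl config get-contexts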

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-676000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (37.576083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-676000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-676000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-676000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.191625ms)

** stderr ** 
	error: context "no-preload-676000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-676000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (35.135459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-467000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-467000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-467000 describe deploy/metrics-server -n kube-system: exit status 1 (29.398333ms)

** stderr ** 
	error: context "embed-certs-467000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-467000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (42.715958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
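
Note that "addons enable" itself exited cleanly; only the follow-up describe failed for lack of a context. On a cluster that did start, the assertion reduces to checking the metrics-server image reference, roughly as below (a sketch using standard kubectl jsonpath, not the test's own code):

	kubectl --context embed-certs-467000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4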

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-676000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (35.676042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)
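
The want-list in the diff above is the full image set the test expects "image list --format=json" to report for v1.31.2; since the VM never booted, the command returned an empty list and every entry shows as missing. To redo the comparison by hand on a working profile (the "repoTags" field name is an assumption about minikube's JSON schema, not taken from this log):

	out/minikube-darwin-arm64 -p no-preload-676000 image list --format=json | jq -r '.[].repoTags[]?' | sort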

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-676000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-676000 --alsologtostderr -v=1: exit status 83 (53.52475ms)

-- stdout --
	* The control-plane node no-preload-676000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-676000"

-- /stdout --
** stderr ** 
	I1204 13:05:11.024201    7225 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:11.024392    7225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:11.024400    7225 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:11.024402    7225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:11.024563    7225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:11.024785    7225 out.go:352] Setting JSON to false
	I1204 13:05:11.024793    7225 mustload.go:65] Loading cluster: no-preload-676000
	I1204 13:05:11.025011    7225 config.go:182] Loaded profile config "no-preload-676000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:11.030634    7225 out.go:177] * The control-plane node no-preload-676000 host is not running: state=Stopped
	I1204 13:05:11.037521    7225 out.go:177]   To start a cluster, run: "minikube start -p no-preload-676000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-676000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (35.731834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (31.791083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-676000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)
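
"pause" exits 83 here because the profile's host is Stopped, and the hint it prints is the fix; once the socket_vmnet issue is resolved, the sequence would be:

	out/minikube-darwin-arm64 start -p no-preload-676000    # the hint printed by pause above
	out/minikube-darwin-arm64 status -p no-preload-676000   # expect "Running" before pausing
	out/minikube-darwin-arm64 pause -p no-preload-676000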

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-596000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-596000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.882817792s)

-- stdout --
	* [default-k8s-diff-port-596000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-596000" primary control-plane node in "default-k8s-diff-port-596000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-596000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:05:11.490298    7256 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:11.490453    7256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:11.490457    7256 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:11.490459    7256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:11.490591    7256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:11.491796    7256 out.go:352] Setting JSON to false
	I1204 13:05:11.509700    7256 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5682,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:05:11.509766    7256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:05:11.514548    7256 out.go:177] * [default-k8s-diff-port-596000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:05:11.519391    7256 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:05:11.519463    7256 notify.go:220] Checking for updates...
	I1204 13:05:11.527515    7256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:05:11.530549    7256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:05:11.533561    7256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:05:11.536520    7256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:05:11.539515    7256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:05:11.542917    7256 config.go:182] Loaded profile config "embed-certs-467000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:11.542983    7256 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:11.543037    7256 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:05:11.547557    7256 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:05:11.554487    7256 start.go:297] selected driver: qemu2
	I1204 13:05:11.554493    7256 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:05:11.554499    7256 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:05:11.556928    7256 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 13:05:11.559481    7256 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:05:11.561026    7256 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:05:11.561054    7256 cni.go:84] Creating CNI manager for ""
	I1204 13:05:11.561077    7256 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:05:11.561085    7256 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 13:05:11.561116    7256 start.go:340] cluster config:
	{Name:default-k8s-diff-port-596000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:11.565748    7256 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:11.573553    7256 out.go:177] * Starting "default-k8s-diff-port-596000" primary control-plane node in "default-k8s-diff-port-596000" cluster
	I1204 13:05:11.577486    7256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:05:11.577503    7256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:05:11.577508    7256 cache.go:56] Caching tarball of preloaded images
	I1204 13:05:11.577581    7256 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:05:11.577587    7256 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:05:11.577641    7256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/default-k8s-diff-port-596000/config.json ...
	I1204 13:05:11.577652    7256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/default-k8s-diff-port-596000/config.json: {Name:mk979d869ac9e819684b018de4b0d73a5b870c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:05:11.578131    7256 start.go:360] acquireMachinesLock for default-k8s-diff-port-596000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:11.578183    7256 start.go:364] duration metric: took 41.958µs to acquireMachinesLock for "default-k8s-diff-port-596000"
	I1204 13:05:11.578196    7256 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:05:11.578225    7256 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:05:11.586498    7256 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:05:11.603204    7256 start.go:159] libmachine.API.Create for "default-k8s-diff-port-596000" (driver="qemu2")
	I1204 13:05:11.603231    7256 client.go:168] LocalClient.Create starting
	I1204 13:05:11.603299    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:05:11.603338    7256 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:11.603349    7256 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:11.603390    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:05:11.603420    7256 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:11.603429    7256 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:11.603827    7256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:05:11.761671    7256 main.go:141] libmachine: Creating SSH key...
	I1204 13:05:11.903564    7256 main.go:141] libmachine: Creating Disk image...
	I1204 13:05:11.903574    7256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:05:11.903817    7256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:11.914136    7256 main.go:141] libmachine: STDOUT: 
	I1204 13:05:11.914164    7256 main.go:141] libmachine: STDERR: 
	I1204 13:05:11.914224    7256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2 +20000M
	I1204 13:05:11.922849    7256 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:05:11.922864    7256 main.go:141] libmachine: STDERR: 
	I1204 13:05:11.922874    7256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:11.922884    7256 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:05:11.922905    7256 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:11.922936    7256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:2c:6d:da:3c:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:11.924717    7256 main.go:141] libmachine: STDOUT: 
	I1204 13:05:11.924745    7256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:11.924762    7256 client.go:171] duration metric: took 321.523209ms to LocalClient.Create
	I1204 13:05:13.926858    7256 start.go:128] duration metric: took 2.348595625s to createHost
	I1204 13:05:13.926877    7256 start.go:83] releasing machines lock for "default-k8s-diff-port-596000", held for 2.348660958s
	W1204 13:05:13.926894    7256 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:13.935597    7256 out.go:177] * Deleting "default-k8s-diff-port-596000" in qemu2 ...
	W1204 13:05:13.948456    7256 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:13.948466    7256 start.go:729] Will try again in 5 seconds ...
	I1204 13:05:18.950720    7256 start.go:360] acquireMachinesLock for default-k8s-diff-port-596000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:18.951122    7256 start.go:364] duration metric: took 328.458µs to acquireMachinesLock for "default-k8s-diff-port-596000"
	I1204 13:05:18.951234    7256 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:05:18.951454    7256 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:05:18.961197    7256 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:05:19.008508    7256 start.go:159] libmachine.API.Create for "default-k8s-diff-port-596000" (driver="qemu2")
	I1204 13:05:19.008559    7256 client.go:168] LocalClient.Create starting
	I1204 13:05:19.008717    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:05:19.008824    7256 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:19.008841    7256 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:19.008907    7256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:05:19.008963    7256 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:19.008977    7256 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:19.009735    7256 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:05:19.180412    7256 main.go:141] libmachine: Creating SSH key...
	I1204 13:05:19.241591    7256 main.go:141] libmachine: Creating Disk image...
	I1204 13:05:19.241597    7256 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:05:19.241783    7256 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:19.251684    7256 main.go:141] libmachine: STDOUT: 
	I1204 13:05:19.251785    7256 main.go:141] libmachine: STDERR: 
	I1204 13:05:19.251850    7256 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2 +20000M
	I1204 13:05:19.260340    7256 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:05:19.260355    7256 main.go:141] libmachine: STDERR: 
	I1204 13:05:19.260375    7256 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:19.260386    7256 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:05:19.260395    7256 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:19.260425    7256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:13:d2:3a:fd:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:19.262138    7256 main.go:141] libmachine: STDOUT: 
	I1204 13:05:19.262153    7256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:19.262163    7256 client.go:171] duration metric: took 253.595583ms to LocalClient.Create
	I1204 13:05:21.264357    7256 start.go:128] duration metric: took 2.312844208s to createHost
	I1204 13:05:21.264421    7256 start.go:83] releasing machines lock for "default-k8s-diff-port-596000", held for 2.313247875s
	W1204 13:05:21.264854    7256 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-596000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-596000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:21.286451    7256 out.go:201] 
	W1204 13:05:21.297476    7256 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:21.297506    7256 out.go:270] * 
	* 
	W1204 13:05:21.300145    7256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:21.312581    7256 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-596000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (71.915667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)
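
All of the start failures in this report reduce to the same root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal standalone probe for that precondition is sketched below; it is a hypothetical diagnostic, not part of the minikube sources, and the socket path is copied from SocketVMnetPath in the config dump above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the cluster config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same condition the driver reports: the connect is refused when the
			// socket_vmnet daemon is not running or the socket file is stale.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}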

TestStartStop/group/embed-certs/serial/SecondStart (7.48s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-467000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-467000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (7.410090833s)

-- stdout --
	* [embed-certs-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-467000" primary control-plane node in "embed-certs-467000" cluster
	* Restarting existing qemu2 VM for "embed-certs-467000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-467000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:05:13.977362    7282 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:13.977555    7282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:13.977559    7282 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:13.977562    7282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:13.977696    7282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:13.978891    7282 out.go:352] Setting JSON to false
	I1204 13:05:13.997601    7282 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5684,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:05:13.997672    7282 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:05:14.001816    7282 out.go:177] * [embed-certs-467000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:05:14.018937    7282 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:05:14.018978    7282 notify.go:220] Checking for updates...
	I1204 13:05:14.024711    7282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:05:14.028702    7282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:05:14.031767    7282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:05:14.033357    7282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:05:14.036683    7282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:05:14.040039    7282 config.go:182] Loaded profile config "embed-certs-467000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:14.040348    7282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:05:14.042232    7282 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 13:05:14.049720    7282 start.go:297] selected driver: qemu2
	I1204 13:05:14.049726    7282 start.go:901] validating driver "qemu2" against &{Name:embed-certs-467000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:14.049772    7282 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:05:14.052424    7282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:05:14.052448    7282 cni.go:84] Creating CNI manager for ""
	I1204 13:05:14.052468    7282 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:05:14.052503    7282 start.go:340] cluster config:
	{Name:embed-certs-467000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-467000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:14.057084    7282 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:14.065719    7282 out.go:177] * Starting "embed-certs-467000" primary control-plane node in "embed-certs-467000" cluster
	I1204 13:05:14.069732    7282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:05:14.069748    7282 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:05:14.069755    7282 cache.go:56] Caching tarball of preloaded images
	I1204 13:05:14.069839    7282 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:05:14.069850    7282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:05:14.069907    7282 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/embed-certs-467000/config.json ...
	I1204 13:05:14.070550    7282 start.go:360] acquireMachinesLock for embed-certs-467000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:14.070611    7282 start.go:364] duration metric: took 53.5µs to acquireMachinesLock for "embed-certs-467000"
	I1204 13:05:14.070622    7282 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:14.070628    7282 fix.go:54] fixHost starting: 
	I1204 13:05:14.070763    7282 fix.go:112] recreateIfNeeded on embed-certs-467000: state=Stopped err=<nil>
	W1204 13:05:14.070776    7282 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:14.079755    7282 out.go:177] * Restarting existing qemu2 VM for "embed-certs-467000" ...
	I1204 13:05:14.085351    7282 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:14.085394    7282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:62:32:8d:e0:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:14.087952    7282 main.go:141] libmachine: STDOUT: 
	I1204 13:05:14.087973    7282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:14.088011    7282 fix.go:56] duration metric: took 17.379709ms for fixHost
	I1204 13:05:14.088016    7282 start.go:83] releasing machines lock for "embed-certs-467000", held for 17.399167ms
	W1204 13:05:14.088024    7282 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:14.088066    7282 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:14.088071    7282 start.go:729] Will try again in 5 seconds ...
	I1204 13:05:19.090277    7282 start.go:360] acquireMachinesLock for embed-certs-467000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:21.264641    7282 start.go:364] duration metric: took 2.174292416s to acquireMachinesLock for "embed-certs-467000"
	I1204 13:05:21.264801    7282 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:21.264822    7282 fix.go:54] fixHost starting: 
	I1204 13:05:21.265564    7282 fix.go:112] recreateIfNeeded on embed-certs-467000: state=Stopped err=<nil>
	W1204 13:05:21.265591    7282 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:21.294548    7282 out.go:177] * Restarting existing qemu2 VM for "embed-certs-467000" ...
	I1204 13:05:21.301468    7282 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:21.301720    7282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:62:32:8d:e0:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/embed-certs-467000/disk.qcow2
	I1204 13:05:21.311554    7282 main.go:141] libmachine: STDOUT: 
	I1204 13:05:21.311597    7282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:21.311666    7282 fix.go:56] duration metric: took 46.845333ms for fixHost
	I1204 13:05:21.311689    7282 start.go:83] releasing machines lock for "embed-certs-467000", held for 47.00975ms
	W1204 13:05:21.311919    7282 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-467000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-467000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:21.323301    7282 out.go:201] 
	W1204 13:05:21.327566    7282 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:21.327603    7282 out.go:270] * 
	* 
	W1204 13:05:21.330320    7282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:21.347852    7282 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-467000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (64.319167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.48s)
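
SecondStart differs from FirstStart only in reusing the existing machine (fixHost) rather than creating one; the retry shape in the log is the same either way: one failed attempt, "Will try again in 5 seconds", one more attempt, then exit with GUEST_PROVISION. The sketch below illustrates that control flow as logged, with startHost standing in for the driver start; it is an illustration of the observed behavior, not the minikube source.

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// startHost stands in for the qemu2 driver start that fails in the log.
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithOneRetry() error {
		err := startHost()
		if err == nil {
			return nil
		}
		log.Printf("! StartHost failed, but will try again: %v", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			return fmt.Errorf("error provisioning guest: Failed to start host: %w", err)
		}
		return nil
	}

	func main() {
		if err := startWithOneRetry(); err != nil {
			log.Fatalf("X Exiting due to GUEST_PROVISION: %v", err)
		}
	}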

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-596000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-596000 create -f testdata/busybox.yaml: exit status 1 (31.73975ms)

** stderr ** 
	error: context "default-k8s-diff-port-596000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-596000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (36.057416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (38.282125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.11s)
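
This DeployApp failure is secondary: FirstStart never produced a cluster, so the kubeconfig contains no "default-k8s-diff-port-596000" context and every kubectl call fails before reaching an API server. That precondition can be checked with client-go's kubeconfig loader; the following is a hypothetical helper, not part of the harness.

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG as exported in the runs above.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintf(os.Stderr, "loading kubeconfig: %v\n", err)
			os.Exit(1)
		}
		const name = "default-k8s-diff-port-596000"
		if _, ok := cfg.Contexts[name]; !ok {
			// The same condition kubectl reports as: context "..." does not exist.
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}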

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-467000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (37.902875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-467000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-467000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-467000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.365917ms)

** stderr ** 
	error: context "embed-certs-467000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-467000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (35.638583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.07s)
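
AddonExistsAfterStop asserts that the describe output contains the substring " registry.k8s.io/echoserver:1.4" (the custom image the test enables the addon with); since the describe call itself fails without a context, the assertion compares against an empty string. A reduced sketch of that check follows (assumed shape for illustration; the real assertion lives in start_stop_delete_test.go).

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "embed-certs-467000",
			"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "describe failed: %v\n%s", err, out)
			os.Exit(1)
		}
		if !strings.Contains(string(out), " registry.k8s.io/echoserver:1.4") {
			fmt.Fprintf(os.Stderr, "addon did not load correct image. Deployment info:\n%s", out)
			os.Exit(1)
		}
	}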

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-596000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-596000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-596000 describe deploy/metrics-server -n kube-system: exit status 1 (29.393084ms)

** stderr ** 
	error: context "default-k8s-diff-port-596000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-596000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (40.156333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-467000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (34.34975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
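
The "(-want +got)" block above is go-cmp diff notation: every expected image carries a "-" prefix and nothing appears under "+", meaning "image list" returned an empty set from the stopped host. A small reproduction of how such a diff is produced with github.com/google/go-cmp (illustrative only; the want entries are copied from the failure above):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.2",
		}
		got := []string{} // the stopped host reports no images at all
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
		}
	}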

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-467000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-467000 --alsologtostderr -v=1: exit status 83 (55.170167ms)

-- stdout --
	* The control-plane node embed-certs-467000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-467000"

-- /stdout --
** stderr ** 
	I1204 13:05:21.655026    7320 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:21.655235    7320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:21.655242    7320 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:21.655245    7320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:21.655375    7320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:21.655625    7320 out.go:352] Setting JSON to false
	I1204 13:05:21.655633    7320 mustload.go:65] Loading cluster: embed-certs-467000
	I1204 13:05:21.655870    7320 config.go:182] Loaded profile config "embed-certs-467000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:21.660146    7320 out.go:177] * The control-plane node embed-certs-467000 host is not running: state=Stopped
	I1204 13:05:21.667096    7320 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-467000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-467000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (34.550166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (32.027ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-467000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)
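
Note the two distinct exit codes in this group: "pause" exits 83 after printing advisory output for a stopped host, while "status" exits 7, which helpers_test.go explicitly tolerates ("may be ok"). A sketch of harness-style exit-code inspection in Go (binary path and profile name copied from the run above; this is an illustration, not the harness code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-467000", "-n", "embed-certs-467000")
		out, err := cmd.CombinedOutput()
		if err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				// Exit status 7 is what the post-mortem above treats as
				// "may be ok" for a host that exists but is stopped.
				fmt.Printf("non-zero exit: %d\n%s", ee.ExitCode(), out)
				return
			}
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("host state: %s", out)
	}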

TestStartStop/group/newest-cni/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-132000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-132000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.9848045s)

-- stdout --
	* [newest-cni-132000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-132000" primary control-plane node in "newest-cni-132000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-132000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:05:21.996848    7343 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:21.996984    7343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:21.996987    7343 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:21.996989    7343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:21.997119    7343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:21.998251    7343 out.go:352] Setting JSON to false
	I1204 13:05:22.016156    7343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5693,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:05:22.016234    7343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:05:22.021131    7343 out.go:177] * [newest-cni-132000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:05:22.028189    7343 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:05:22.028229    7343 notify.go:220] Checking for updates...
	I1204 13:05:22.035180    7343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:05:22.036642    7343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:05:22.040088    7343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:05:22.043121    7343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:05:22.046150    7343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:05:22.049492    7343 config.go:182] Loaded profile config "default-k8s-diff-port-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:22.049555    7343 config.go:182] Loaded profile config "multinode-729000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:22.049602    7343 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:05:22.054118    7343 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 13:05:22.061089    7343 start.go:297] selected driver: qemu2
	I1204 13:05:22.061097    7343 start.go:901] validating driver "qemu2" against <nil>
	I1204 13:05:22.061109    7343 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:05:22.063497    7343 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1204 13:05:22.063537    7343 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1204 13:05:22.071135    7343 out.go:177] * Automatically selected the socket_vmnet network
	I1204 13:05:22.074232    7343 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 13:05:22.074246    7343 cni.go:84] Creating CNI manager for ""
	I1204 13:05:22.074267    7343 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:05:22.074271    7343 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 13:05:22.074294    7343 start.go:340] cluster config:
	{Name:newest-cni-132000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:22.079058    7343 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:22.087114    7343 out.go:177] * Starting "newest-cni-132000" primary control-plane node in "newest-cni-132000" cluster
	I1204 13:05:22.091123    7343 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:05:22.091139    7343 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:05:22.091152    7343 cache.go:56] Caching tarball of preloaded images
	I1204 13:05:22.091247    7343 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:05:22.091252    7343 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:05:22.091312    7343 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/newest-cni-132000/config.json ...
	I1204 13:05:22.091322    7343 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/newest-cni-132000/config.json: {Name:mkdd36062f6e91c4e9ec221a648313c547dfd0c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 13:05:22.091741    7343 start.go:360] acquireMachinesLock for newest-cni-132000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:22.091789    7343 start.go:364] duration metric: took 41.458µs to acquireMachinesLock for "newest-cni-132000"
	I1204 13:05:22.091801    7343 start.go:93] Provisioning new machine with config: &{Name:newest-cni-132000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:05:22.091833    7343 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:05:22.100101    7343 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:05:22.117464    7343 start.go:159] libmachine.API.Create for "newest-cni-132000" (driver="qemu2")
	I1204 13:05:22.117488    7343 client.go:168] LocalClient.Create starting
	I1204 13:05:22.117557    7343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:05:22.117597    7343 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:22.117608    7343 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:22.117646    7343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:05:22.117677    7343 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:22.117685    7343 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:22.118045    7343 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:05:22.277766    7343 main.go:141] libmachine: Creating SSH key...
	I1204 13:05:22.465648    7343 main.go:141] libmachine: Creating Disk image...
	I1204 13:05:22.465656    7343 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:05:22.465910    7343 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:22.476171    7343 main.go:141] libmachine: STDOUT: 
	I1204 13:05:22.476195    7343 main.go:141] libmachine: STDERR: 
	I1204 13:05:22.476252    7343 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2 +20000M
	I1204 13:05:22.484730    7343 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:05:22.484745    7343 main.go:141] libmachine: STDERR: 
	I1204 13:05:22.484755    7343 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:22.484760    7343 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:05:22.484776    7343 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:22.484802    7343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:e5:b7:e0:d2:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:22.486582    7343 main.go:141] libmachine: STDOUT: 
	I1204 13:05:22.486597    7343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:22.486615    7343 client.go:171] duration metric: took 369.117708ms to LocalClient.Create
	I1204 13:05:24.488827    7343 start.go:128] duration metric: took 2.396937833s to createHost
	I1204 13:05:24.488893    7343 start.go:83] releasing machines lock for "newest-cni-132000", held for 2.397064375s
	W1204 13:05:24.488963    7343 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:24.501254    7343 out.go:177] * Deleting "newest-cni-132000" in qemu2 ...
	W1204 13:05:24.530968    7343 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:24.530997    7343 start.go:729] Will try again in 5 seconds ...
	I1204 13:05:29.533387    7343 start.go:360] acquireMachinesLock for newest-cni-132000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:29.533835    7343 start.go:364] duration metric: took 319.875µs to acquireMachinesLock for "newest-cni-132000"
	I1204 13:05:29.533960    7343 start.go:93] Provisioning new machine with config: &{Name:newest-cni-132000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 13:05:29.534254    7343 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 13:05:29.539903    7343 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 13:05:29.585777    7343 start.go:159] libmachine.API.Create for "newest-cni-132000" (driver="qemu2")
	I1204 13:05:29.585836    7343 client.go:168] LocalClient.Create starting
	I1204 13:05:29.585974    7343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/ca.pem
	I1204 13:05:29.586059    7343 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:29.586078    7343 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:29.586141    7343 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19985-1334/.minikube/certs/cert.pem
	I1204 13:05:29.586200    7343 main.go:141] libmachine: Decoding PEM data...
	I1204 13:05:29.586216    7343 main.go:141] libmachine: Parsing certificate...
	I1204 13:05:29.586898    7343 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 13:05:29.756146    7343 main.go:141] libmachine: Creating SSH key...
	I1204 13:05:29.864480    7343 main.go:141] libmachine: Creating Disk image...
	I1204 13:05:29.864486    7343 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 13:05:29.864684    7343 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2.raw /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:29.874793    7343 main.go:141] libmachine: STDOUT: 
	I1204 13:05:29.874812    7343 main.go:141] libmachine: STDERR: 
	I1204 13:05:29.874867    7343 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2 +20000M
	I1204 13:05:29.883282    7343 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 13:05:29.883300    7343 main.go:141] libmachine: STDERR: 
	I1204 13:05:29.883311    7343 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:29.883315    7343 main.go:141] libmachine: Starting QEMU VM...
	I1204 13:05:29.883323    7343 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:29.883366    7343 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:35:a7:61:cf:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:29.885082    7343 main.go:141] libmachine: STDOUT: 
	I1204 13:05:29.885099    7343 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:29.885112    7343 client.go:171] duration metric: took 299.266375ms to LocalClient.Create
	I1204 13:05:31.887311    7343 start.go:128] duration metric: took 2.352998083s to createHost
	I1204 13:05:31.887395    7343 start.go:83] releasing machines lock for "newest-cni-132000", held for 2.353508291s
	W1204 13:05:31.887851    7343 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-132000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:31.908525    7343 out.go:201] 
	W1204 13:05:31.912515    7343 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:31.912544    7343 out.go:270] * 
	W1204 13:05:31.915290    7343 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:31.924564    7343 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-132000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000: exit status 7 (70.006167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.06s)
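
Every failure in this run bottoms out at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), meaning the socket file exists but no socket_vmnet daemon is listening behind it on this agent. A minimal Go sketch, not part of the test suite, that reproduces the failing dial in isolation (the socket path is taken from the SocketVMnetPath field in the config dumps above):

	// probe_socket_vmnet.go: dial the unix socket that socket_vmnet_client
	// needs before it can launch QEMU. "Connection refused" means the socket
	// file exists but nothing is accepting connections on it.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}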

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-596000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-596000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (6.536077917s)

-- stdout --
	* [default-k8s-diff-port-596000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-596000" primary control-plane node in "default-k8s-diff-port-596000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-596000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:05:25.462661    7371 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:25.462834    7371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:25.462838    7371 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:25.462840    7371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:25.462970    7371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:25.464068    7371 out.go:352] Setting JSON to false
	I1204 13:05:25.481731    7371 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5696,"bootTime":1733340629,"procs":579,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:05:25.481804    7371 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:05:25.487192    7371 out.go:177] * [default-k8s-diff-port-596000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:05:25.495236    7371 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:05:25.495280    7371 notify.go:220] Checking for updates...
	I1204 13:05:25.502224    7371 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:05:25.505137    7371 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:05:25.508186    7371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:05:25.509588    7371 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:05:25.512132    7371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:05:25.515516    7371 config.go:182] Loaded profile config "default-k8s-diff-port-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:25.515775    7371 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:05:25.517447    7371 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 13:05:25.524180    7371 start.go:297] selected driver: qemu2
	I1204 13:05:25.524188    7371 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:25.524247    7371 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:05:25.526761    7371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 13:05:25.526787    7371 cni.go:84] Creating CNI manager for ""
	I1204 13:05:25.526807    7371 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:05:25.526831    7371 start.go:340] cluster config:
	{Name:default-k8s-diff-port-596000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-596000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:25.531101    7371 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:25.539156    7371 out.go:177] * Starting "default-k8s-diff-port-596000" primary control-plane node in "default-k8s-diff-port-596000" cluster
	I1204 13:05:25.540776    7371 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:05:25.540790    7371 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:05:25.540802    7371 cache.go:56] Caching tarball of preloaded images
	I1204 13:05:25.540867    7371 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:05:25.540873    7371 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:05:25.540930    7371 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/default-k8s-diff-port-596000/config.json ...
	I1204 13:05:25.541490    7371 start.go:360] acquireMachinesLock for default-k8s-diff-port-596000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:25.541542    7371 start.go:364] duration metric: took 44.833µs to acquireMachinesLock for "default-k8s-diff-port-596000"
	I1204 13:05:25.541553    7371 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:25.541556    7371 fix.go:54] fixHost starting: 
	I1204 13:05:25.541675    7371 fix.go:112] recreateIfNeeded on default-k8s-diff-port-596000: state=Stopped err=<nil>
	W1204 13:05:25.541684    7371 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:25.546183    7371 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-596000" ...
	I1204 13:05:25.552141    7371 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:25.552173    7371 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:13:d2:3a:fd:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:25.554291    7371 main.go:141] libmachine: STDOUT: 
	I1204 13:05:25.554313    7371 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:25.554349    7371 fix.go:56] duration metric: took 12.7895ms for fixHost
	I1204 13:05:25.554355    7371 start.go:83] releasing machines lock for "default-k8s-diff-port-596000", held for 12.807542ms
	W1204 13:05:25.554361    7371 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:25.554410    7371 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:25.554415    7371 start.go:729] Will try again in 5 seconds ...
	I1204 13:05:30.556395    7371 start.go:360] acquireMachinesLock for default-k8s-diff-port-596000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:31.887634    7371 start.go:364] duration metric: took 1.331120167s to acquireMachinesLock for "default-k8s-diff-port-596000"
	I1204 13:05:31.887837    7371 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:31.887859    7371 fix.go:54] fixHost starting: 
	I1204 13:05:31.888579    7371 fix.go:112] recreateIfNeeded on default-k8s-diff-port-596000: state=Stopped err=<nil>
	W1204 13:05:31.888607    7371 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:31.908525    7371 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-596000" ...
	I1204 13:05:31.912436    7371 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:31.912720    7371 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:13:d2:3a:fd:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/default-k8s-diff-port-596000/disk.qcow2
	I1204 13:05:31.922810    7371 main.go:141] libmachine: STDOUT: 
	I1204 13:05:31.922868    7371 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:31.922930    7371 fix.go:56] duration metric: took 35.07375ms for fixHost
	I1204 13:05:31.922949    7371 start.go:83] releasing machines lock for "default-k8s-diff-port-596000", held for 35.278625ms
	W1204 13:05:31.923157    7371 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-596000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:31.936432    7371 out.go:201] 
	W1204 13:05:31.940586    7371 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:31.940627    7371 out.go:270] * 
	W1204 13:05:31.943389    7371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:31.954465    7371 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-596000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (59.933375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.60s)
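
The stderr above also shows minikube's recovery shape for this error: StartHost fails, the machines lock is released, start.go waits a fixed five seconds and retries once (start.go:714/729), and only then exits with GUEST_PROVISION. A rough sketch of that control flow, with startHost as a hypothetical stand-in for the qemu2 driver start step:

	// retry_sketch.go: one retry after a fixed 5s delay, then give up,
	// matching the "Will try again in 5 seconds" lines in the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in; on this agent the real step always fails the same way.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}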

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-596000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (40.904416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
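
The context "..." does not exist failures are purely downstream: the cluster never started, so minikube never wrote a context for the profile into the kubeconfig, and every kubectl call against that context fails immediately. A minimal sketch of the same existence check using client-go's kubeconfig loader (KUBECONFIG as printed near the top of each run; the profile name mirrors the one above):

	// context_check.go: load the kubeconfig and check whether the profile's
	// context was ever written. Uses k8s.io/client-go/tools/clientcmd.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["default-k8s-diff-port-596000"]; !ok {
			fmt.Println(`context "default-k8s-diff-port-596000" does not exist`)
		}
	}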

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-596000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-596000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-596000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.314917ms)

** stderr ** 
	error: context "default-k8s-diff-port-596000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-596000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (40.185042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-596000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (33.158833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
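
The "(-want +got)" block above is a go-cmp-style diff: the expected v1.31.2 image list compared against the output of image list --format=json, which came back empty because the VM never booted, so every expected image lands on the "-" side. A small sketch of producing such a diff with github.com/google/go-cmp (image list truncated for brevity):

	// image_diff_sketch.go: diff an expected image list against an empty
	// actual list, the way the failure text above is rendered.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.2",
			// ... remaining v1.31.2 images elided
		}
		var got []string // nothing came back: the host is stopped
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
		}
	}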

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-596000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-596000 --alsologtostderr -v=1: exit status 83 (45.550916ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-596000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-596000"

-- /stdout --
** stderr ** 
	I1204 13:05:32.243220    7406 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:32.243411    7406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:32.243414    7406 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:32.243417    7406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:32.243558    7406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:32.243780    7406 out.go:352] Setting JSON to false
	I1204 13:05:32.243789    7406 mustload.go:65] Loading cluster: default-k8s-diff-port-596000
	I1204 13:05:32.244021    7406 config.go:182] Loaded profile config "default-k8s-diff-port-596000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:32.248443    7406 out.go:177] * The control-plane node default-k8s-diff-port-596000 host is not running: state=Stopped
	I1204 13:05:32.252402    7406 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-596000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-596000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (33.421833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (32.945375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-596000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
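
Throughout these post-mortems, minikube status exits with code 7, which the helpers annotate as "(may be ok)": a stopped host is an expected state rather than a harness error, so log retrieval is skipped instead of failing the post-mortem. A sketch of reading that exit code from Go, assuming the same binary and profile name used above:

	// status_exit_sketch.go: run `minikube status` and branch on exit code 7
	// (host stopped) the way the post-mortem helpers do.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-596000")
		out, err := cmd.Output()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			fmt.Printf("status error: exit status 7 (may be ok): %s", out)
			return
		}
		fmt.Printf("status: %s err=%v\n", out, err)
	}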

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-132000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-132000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.190975917s)

-- stdout --
	* [newest-cni-132000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-132000" primary control-plane node in "newest-cni-132000" cluster
	* Restarting existing qemu2 VM for "newest-cni-132000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-132000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 13:05:35.800369    7443 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:35.800514    7443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:35.800517    7443 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:35.800520    7443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:35.800646    7443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:35.801716    7443 out.go:352] Setting JSON to false
	I1204 13:05:35.819614    7443 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5706,"bootTime":1733340629,"procs":575,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 13:05:35.819681    7443 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 13:05:35.824771    7443 out.go:177] * [newest-cni-132000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 13:05:35.831766    7443 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 13:05:35.831844    7443 notify.go:220] Checking for updates...
	I1204 13:05:35.839673    7443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 13:05:35.842715    7443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 13:05:35.846692    7443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 13:05:35.849731    7443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 13:05:35.852706    7443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 13:05:35.856011    7443 config.go:182] Loaded profile config "newest-cni-132000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:35.856293    7443 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 13:05:35.860700    7443 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 13:05:35.867673    7443 start.go:297] selected driver: qemu2
	I1204 13:05:35.867679    7443 start.go:901] validating driver "qemu2" against &{Name:newest-cni-132000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:35.867723    7443 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 13:05:35.870269    7443 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 13:05:35.870292    7443 cni.go:84] Creating CNI manager for ""
	I1204 13:05:35.870316    7443 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 13:05:35.870344    7443 start.go:340] cluster config:
	{Name:newest-cni-132000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-132000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 13:05:35.874804    7443 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 13:05:35.881743    7443 out.go:177] * Starting "newest-cni-132000" primary control-plane node in "newest-cni-132000" cluster
	I1204 13:05:35.884673    7443 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 13:05:35.884701    7443 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 13:05:35.884713    7443 cache.go:56] Caching tarball of preloaded images
	I1204 13:05:35.884798    7443 preload.go:172] Found /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 13:05:35.884804    7443 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 13:05:35.884873    7443 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/newest-cni-132000/config.json ...
	I1204 13:05:35.885396    7443 start.go:360] acquireMachinesLock for newest-cni-132000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:35.885449    7443 start.go:364] duration metric: took 46.583µs to acquireMachinesLock for "newest-cni-132000"
	I1204 13:05:35.885458    7443 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:35.885463    7443 fix.go:54] fixHost starting: 
	I1204 13:05:35.885596    7443 fix.go:112] recreateIfNeeded on newest-cni-132000: state=Stopped err=<nil>
	W1204 13:05:35.885605    7443 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:35.889712    7443 out.go:177] * Restarting existing qemu2 VM for "newest-cni-132000" ...
	I1204 13:05:35.897656    7443 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:35.897690    7443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:35:a7:61:cf:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:35.900014    7443 main.go:141] libmachine: STDOUT: 
	I1204 13:05:35.900033    7443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:35.900063    7443 fix.go:56] duration metric: took 14.597584ms for fixHost
	I1204 13:05:35.900069    7443 start.go:83] releasing machines lock for "newest-cni-132000", held for 14.615583ms
	W1204 13:05:35.900077    7443 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:35.900133    7443 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:35.900138    7443 start.go:729] Will try again in 5 seconds ...
	I1204 13:05:40.900938    7443 start.go:360] acquireMachinesLock for newest-cni-132000: {Name:mk84bd639b4e5a8c4cdfeaa9bee1047023ab4df8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 13:05:40.901391    7443 start.go:364] duration metric: took 344.416µs to acquireMachinesLock for "newest-cni-132000"
	I1204 13:05:40.901523    7443 start.go:96] Skipping create...Using existing machine configuration
	I1204 13:05:40.901543    7443 fix.go:54] fixHost starting: 
	I1204 13:05:40.902242    7443 fix.go:112] recreateIfNeeded on newest-cni-132000: state=Stopped err=<nil>
	W1204 13:05:40.902269    7443 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 13:05:40.910929    7443 out.go:177] * Restarting existing qemu2 VM for "newest-cni-132000" ...
	I1204 13:05:40.913957    7443 qemu.go:418] Using hvf for hardware acceleration
	I1204 13:05:40.914183    7443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:35:a7:61:cf:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19985-1334/.minikube/machines/newest-cni-132000/disk.qcow2
	I1204 13:05:40.924494    7443 main.go:141] libmachine: STDOUT: 
	I1204 13:05:40.924552    7443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 13:05:40.924632    7443 fix.go:56] duration metric: took 23.0915ms for fixHost
	I1204 13:05:40.924653    7443 start.go:83] releasing machines lock for "newest-cni-132000", held for 23.239083ms
	W1204 13:05:40.924826    7443 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-132000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-132000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 13:05:40.931893    7443 out.go:201] 
	W1204 13:05:40.936044    7443 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 13:05:40.936066    7443 out.go:270] * 
	* 
	W1204 13:05:40.938553    7443 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 13:05:40.945899    7443 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-132000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000: exit status 7 (73.324833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
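
Every attempt in this failure dies at the same point: the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so the VM never boots and the retry five seconds later fails identically. A minimal triage sketch, assuming socket_vmnet was installed under /opt/socket_vmnet as the client path in the log suggests; the daemon invocation follows the upstream lima-vm/socket_vmnet defaults and is an assumption, not something taken from this report:

	ls -l /var/run/socket_vmnet    # the socket only exists while the daemon is up
	pgrep -fl socket_vmnet         # is any socket_vmnet daemon process running?
	# If not, start the daemon as root (vmnet requires root), then retry the start:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	out/minikube-darwin-arm64 start -p newest-cni-132000 --driver=qemu2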

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-132000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000: exit status 7 (34.201083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
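
The image diff in this failure is a downstream effect of the failed SecondStart above, not a separate image problem: with the host stopped, "image list" returns nothing, so every expected v1.31.2 image lands on the "-want" side of the diff. The same comparison can be reproduced by hand with the command the test runs:

	out/minikube-darwin-arm64 -p newest-cni-132000 image list --format=json
	# returns an empty list while the VM is stopped, matching the all-"-want" diff above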

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-132000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-132000 --alsologtostderr -v=1: exit status 83 (46.374833ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-132000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-132000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 13:05:41.143916    7461 out.go:345] Setting OutFile to fd 1 ...
	I1204 13:05:41.144131    7461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:41.144134    7461 out.go:358] Setting ErrFile to fd 2...
	I1204 13:05:41.144136    7461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 13:05:41.144274    7461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 13:05:41.144522    7461 out.go:352] Setting JSON to false
	I1204 13:05:41.144534    7461 mustload.go:65] Loading cluster: newest-cni-132000
	I1204 13:05:41.144762    7461 config.go:182] Loaded profile config "newest-cni-132000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 13:05:41.149721    7461 out.go:177] * The control-plane node newest-cni-132000 host is not running: state=Stopped
	I1204 13:05:41.153867    7461 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-132000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-132000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000: exit status 7 (34.029375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-132000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000: exit status 7 (34.642042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-132000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)
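
As with the two failures above, the pause failure (exit status 83, "host is not running: state=Stopped") only reflects the stopped host, not a pause-specific bug: pause needs a running control plane. The recovery path is the same start triage sketched under SecondStart:

	out/minikube-darwin-arm64 status -p newest-cni-132000    # reports "Stopped" throughout this run
	out/minikube-darwin-arm64 start -p newest-cni-132000     # must succeed before pause can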

                                                
                                    

Test pass (153/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 13.3
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 197.48
29 TestAddons/serial/Volcano 40.04
31 TestAddons/serial/GCPAuth/Namespaces 0.08
32 TestAddons/serial/GCPAuth/FakeCredentials 8.37
35 TestAddons/parallel/Registry 13.72
36 TestAddons/parallel/Ingress 18.54
37 TestAddons/parallel/InspektorGadget 10.29
38 TestAddons/parallel/MetricsServer 5.32
40 TestAddons/parallel/CSI 36.65
41 TestAddons/parallel/Headlamp 15.6
42 TestAddons/parallel/CloudSpanner 5.21
43 TestAddons/parallel/LocalPath 52.06
44 TestAddons/parallel/NvidiaDevicePlugin 6.16
45 TestAddons/parallel/Yakd 10.29
47 TestAddons/StoppedEnableDisable 12.43
55 TestHyperKitDriverInstallOrUpdate 11.36
58 TestErrorSpam/setup 35.33
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.26
61 TestErrorSpam/pause 0.66
62 TestErrorSpam/unpause 0.6
63 TestErrorSpam/stop 55.27
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 47.51
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.19
70 TestFunctional/serial/KubeContext 0.03
71 TestFunctional/serial/KubectlGetPods 0.04
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
75 TestFunctional/serial/CacheCmd/cache/add_local 1.28
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
79 TestFunctional/serial/CacheCmd/cache/cache_reload 0.69
80 TestFunctional/serial/CacheCmd/cache/delete 0.08
81 TestFunctional/serial/MinikubeKubectlCmd 0.77
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.14
83 TestFunctional/serial/ExtraConfig 36.22
84 TestFunctional/serial/ComponentHealth 0.04
85 TestFunctional/serial/LogsCmd 0.65
86 TestFunctional/serial/LogsFileCmd 0.62
87 TestFunctional/serial/InvalidService 4.09
89 TestFunctional/parallel/ConfigCmd 0.25
90 TestFunctional/parallel/DashboardCmd 10.05
91 TestFunctional/parallel/DryRun 0.25
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.27
98 TestFunctional/parallel/AddonsCmd 0.11
99 TestFunctional/parallel/PersistentVolumeClaim 25.39
101 TestFunctional/parallel/SSHCmd 0.14
102 TestFunctional/parallel/CpCmd 0.48
104 TestFunctional/parallel/FileSync 0.07
105 TestFunctional/parallel/CertSync 0.41
109 TestFunctional/parallel/NodeLabels 0.04
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
113 TestFunctional/parallel/License 0.32
114 TestFunctional/parallel/Version/short 0.04
115 TestFunctional/parallel/Version/components 0.28
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
120 TestFunctional/parallel/ImageCommands/ImageBuild 1.89
121 TestFunctional/parallel/ImageCommands/Setup 1.97
122 TestFunctional/parallel/DockerEnv/bash 0.29
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.5
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.42
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.11
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.27
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
145 TestFunctional/parallel/ServiceCmd/List 0.32
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
148 TestFunctional/parallel/ServiceCmd/Format 0.1
149 TestFunctional/parallel/ServiceCmd/URL 0.1
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
151 TestFunctional/parallel/ProfileCmd/profile_list 0.14
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
153 TestFunctional/parallel/MountCmd/any-port 5.22
154 TestFunctional/parallel/MountCmd/specific-port 0.96
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/CopyFile 0.03
176 TestImageBuild/serial/Setup 34.8
177 TestImageBuild/serial/NormalBuild 1.38
178 TestImageBuild/serial/BuildWithBuildArg 0.43
179 TestImageBuild/serial/BuildWithDockerIgnore 0.34
180 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.32
185 TestJSONOutput/start/Audit 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 4.88
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
212 TestMainNoArgs 0.04
213 TestMinikubeProfile 71.72
259 TestStoppedBinaryUpgrade/Setup 1.21
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
276 TestNoKubernetes/serial/ProfileList 31.47
277 TestNoKubernetes/serial/Stop 3.59
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
294 TestStartStop/group/old-k8s-version/serial/Stop 1.81
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
305 TestStartStop/group/no-preload/serial/Stop 3.37
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
316 TestStartStop/group/embed-certs/serial/Stop 2.77
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.64
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
336 TestStartStop/group/newest-cni/serial/Stop 3.54
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1204 11:52:15.621514    1856 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1204 11:52:15.621974    1856 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-612000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-612000: exit status 85 (100.576375ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-612000 | jenkins | v1.34.0 | 04 Dec 24 11:51 PST |          |
	|         | -p download-only-612000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 11:51:49
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 11:51:49.922492    1857 out.go:345] Setting OutFile to fd 1 ...
	I1204 11:51:49.922681    1857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 11:51:49.922685    1857 out.go:358] Setting ErrFile to fd 2...
	I1204 11:51:49.922687    1857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 11:51:49.922820    1857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	W1204 11:51:49.922895    1857 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19985-1334/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19985-1334/.minikube/config/config.json: no such file or directory
	I1204 11:51:49.924311    1857 out.go:352] Setting JSON to true
	I1204 11:51:49.943735    1857 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1280,"bootTime":1733340629,"procs":580,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 11:51:49.943816    1857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 11:51:49.949248    1857 out.go:97] [download-only-612000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 11:51:49.949396    1857 notify.go:220] Checking for updates...
	W1204 11:51:49.949464    1857 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 11:51:49.952177    1857 out.go:169] MINIKUBE_LOCATION=19985
	I1204 11:51:49.953796    1857 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 11:51:49.958238    1857 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 11:51:49.962248    1857 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 11:51:49.965237    1857 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	W1204 11:51:49.971233    1857 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 11:51:49.971493    1857 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 11:51:49.975123    1857 out.go:97] Using the qemu2 driver based on user configuration
	I1204 11:51:49.975147    1857 start.go:297] selected driver: qemu2
	I1204 11:51:49.975163    1857 start.go:901] validating driver "qemu2" against <nil>
	I1204 11:51:49.975268    1857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 11:51:49.979134    1857 out.go:169] Automatically selected the socket_vmnet network
	I1204 11:51:49.984999    1857 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1204 11:51:49.985093    1857 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 11:51:49.985134    1857 cni.go:84] Creating CNI manager for ""
	I1204 11:51:49.985181    1857 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 11:51:49.985246    1857 start.go:340] cluster config:
	{Name:download-only-612000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-612000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 11:51:49.989904    1857 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 11:51:49.993181    1857 out.go:97] Downloading VM boot image ...
	I1204 11:51:49.993195    1857 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1204 11:51:59.549400    1857 out.go:97] Starting "download-only-612000" primary control-plane node in "download-only-612000" cluster
	I1204 11:51:59.549426    1857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 11:51:59.627372    1857 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 11:51:59.627380    1857 cache.go:56] Caching tarball of preloaded images
	I1204 11:51:59.627650    1857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 11:51:59.631852    1857 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 11:51:59.631861    1857 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:51:59.729244    1857 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 11:52:14.195249    1857 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:52:14.195442    1857 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:52:14.912437    1857 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 11:52:14.912653    1857 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/download-only-612000/config.json ...
	I1204 11:52:14.912670    1857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/download-only-612000/config.json: {Name:mka66230f231944a3fd443dbe207fab79dc8531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 11:52:14.912972    1857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 11:52:14.913221    1857 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1204 11:52:15.569128    1857 out.go:193] 
	W1204 11:52:15.575165    1857 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320 0x109db8320] Decompressors:map[bz2:0x14000693030 gz:0x14000693038 tar:0x14000692f80 tar.bz2:0x14000692f90 tar.gz:0x14000692fa0 tar.xz:0x14000692ff0 tar.zst:0x14000693010 tbz2:0x14000692f90 tgz:0x14000692fa0 txz:0x14000692ff0 tzst:0x14000693010 xz:0x14000693060 zip:0x14000693070 zst:0x14000693068] Getters:map[file:0x140018c4840 http:0x1400091a0a0 https:0x1400091a0f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1204 11:52:15.575203    1857 out_reason.go:110] 
	W1204 11:52:15.584137    1857 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 11:52:15.589102    1857 out.go:193] 
	
	
	* The control-plane node download-only-612000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-612000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
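
The kubectl caching failure captured in the log above is a 404 on the checksum URL, which indicates the v1.20.0 release publishes no darwin/arm64 kubectl binary; the v1.31.2 download later in this report succeeds cleanly. One quick way to confirm from any machine, assuming curl is available to follow the dl.k8s.io redirects:

	curl -sILo /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256    # 404, per the error above
	curl -sILo /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256    # 200 on releases that ship darwin/arm64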

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-612000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (13.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-460000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-460000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (13.302858625s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (13.30s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1204 11:52:29.302611    1856 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1204 11:52:29.302665    1856 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-460000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-460000: exit status 85 (83.200833ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-612000 | jenkins | v1.34.0 | 04 Dec 24 11:51 PST |                     |
	|         | -p download-only-612000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Dec 24 11:52 PST | 04 Dec 24 11:52 PST |
	| delete  | -p download-only-612000        | download-only-612000 | jenkins | v1.34.0 | 04 Dec 24 11:52 PST | 04 Dec 24 11:52 PST |
	| start   | -o=json --download-only        | download-only-460000 | jenkins | v1.34.0 | 04 Dec 24 11:52 PST |                     |
	|         | -p download-only-460000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 11:52:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 11:52:16.030687    1884 out.go:345] Setting OutFile to fd 1 ...
	I1204 11:52:16.030846    1884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 11:52:16.030849    1884 out.go:358] Setting ErrFile to fd 2...
	I1204 11:52:16.030852    1884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 11:52:16.030967    1884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 11:52:16.032166    1884 out.go:352] Setting JSON to true
	I1204 11:52:16.050155    1884 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1307,"bootTime":1733340629,"procs":578,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 11:52:16.050232    1884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 11:52:16.054669    1884 out.go:97] [download-only-460000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 11:52:16.054760    1884 notify.go:220] Checking for updates...
	I1204 11:52:16.058689    1884 out.go:169] MINIKUBE_LOCATION=19985
	I1204 11:52:16.061693    1884 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 11:52:16.065714    1884 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 11:52:16.068687    1884 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 11:52:16.072535    1884 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	W1204 11:52:16.079679    1884 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 11:52:16.079875    1884 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 11:52:16.081341    1884 out.go:97] Using the qemu2 driver based on user configuration
	I1204 11:52:16.081350    1884 start.go:297] selected driver: qemu2
	I1204 11:52:16.081353    1884 start.go:901] validating driver "qemu2" against <nil>
	I1204 11:52:16.081401    1884 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 11:52:16.084614    1884 out.go:169] Automatically selected the socket_vmnet network
	I1204 11:52:16.090962    1884 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1204 11:52:16.091054    1884 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 11:52:16.091072    1884 cni.go:84] Creating CNI manager for ""
	I1204 11:52:16.091102    1884 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 11:52:16.091107    1884 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 11:52:16.091153    1884 start.go:340] cluster config:
	{Name:download-only-460000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 11:52:16.095410    1884 iso.go:125] acquiring lock: {Name:mkd0f8b7b77d94b51ab9000e7348200f036cc5c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 11:52:16.096682    1884 out.go:97] Starting "download-only-460000" primary control-plane node in "download-only-460000" cluster
	I1204 11:52:16.096688    1884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 11:52:16.159719    1884 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 11:52:16.159745    1884 cache.go:56] Caching tarball of preloaded images
	I1204 11:52:16.159986    1884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 11:52:16.164210    1884 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1204 11:52:16.164221    1884 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:52:16.255996    1884 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 11:52:24.208095    1884 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:52:24.208255    1884 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1204 11:52:24.741623    1884 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 11:52:24.741819    1884 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/download-only-460000/config.json ...
	I1204 11:52:24.741840    1884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/download-only-460000/config.json: {Name:mk432e82e195d2396c6068a2abbc56e3086ee341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 11:52:24.742154    1884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 11:52:24.742317    1884 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19985-1334/.minikube/cache/darwin/arm64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-460000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-460000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-460000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.37s)

                                                
                                                
=== RUN   TestBinaryMirror
I1204 11:52:29.838937    1856 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-339000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-339000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-339000
--- PASS: TestBinaryMirror (0.37s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-089000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-089000: exit status 85 (65.928875ms)

                                                
                                                
-- stdout --
	* Profile "addons-089000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-089000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-089000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-089000: exit status 85 (62.123625ms)

                                                
                                                
-- stdout --
	* Profile "addons-089000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-089000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (197.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-089000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-089000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m17.475516042s)
--- PASS: TestAddons/Setup (197.48s)

TestAddons/serial/Volcano (40.04s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 7.086292ms
addons_test.go:823: volcano-controller stabilized in 7.103792ms
addons_test.go:807: volcano-scheduler stabilized in 7.12925ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-67pm8" [cbe6febb-44ee-4777-95e2-db1ed12e09bf] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0092495s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-t96sb" [dd0aa297-a20c-4f67-b50c-be025aae3a45] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005754292s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-rq8mg" [b870c84b-1cec-432e-9845-8b8e4c2eb5f1] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006213833s
addons_test.go:842: (dbg) Run:  kubectl --context addons-089000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-089000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-089000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9ea62135-a211-4229-974b-1fedab778cbb] Pending
helpers_test.go:344: "test-job-nginx-0" [9ea62135-a211-4229-974b-1fedab778cbb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9ea62135-a211-4229-974b-1fedab778cbb] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.007189792s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-089000 addons disable volcano --alsologtostderr -v=1: (10.786869167s)
--- PASS: TestAddons/serial/Volcano (40.04s)
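The vcjob.yaml fixture itself is not reproduced in this log. A minimal sketch of a Volcano Job of the shape this test submits, with the job name and namespace taken from the log above (the image and the rest of the spec are assumptions, not the actual testdata file):

	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano   # hand the pod off to the Volcano scheduler
	  minAvailable: 1          # gang-scheduling threshold: start only when 1 pod fits
	  tasks:
	    - replicas: 1
	      name: nginx          # yields the pod name test-job-nginx-0 seen above
	      template:
	        spec:
	          restartPolicy: Never
	          containers:
	            - name: nginx
	              image: nginx   # assumed image

The test waits for the scheduler, admission, and controller deployments to stabilize before submitting the job, then waits for the test-job-nginx-0 pod to reach Running before disabling the addon.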
TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-089000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-089000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/serial/GCPAuth/FakeCredentials (8.37s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-089000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-089000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e59ad387-259e-47c9-98fd-8c82bb48aa1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e59ad387-259e-47c9-98fd-8c82bb48aa1f] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004384709s
addons_test.go:633: (dbg) Run:  kubectl --context addons-089000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-089000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-089000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-089000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.37s)

TestAddons/parallel/Registry (13.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.395375ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-wzp8v" [19dff5b9-4c21-4ac6-adc1-e1dfc61dbb6a] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005921416s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nb6zx" [91af5d43-4cf7-4708-b240-bbe0949c78a7] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004922791s
addons_test.go:331: (dbg) Run:  kubectl --context addons-089000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-089000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-089000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.415486208s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 ip
2024/12/04 11:56:58 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.72s)

TestAddons/parallel/Ingress (18.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-089000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-089000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-089000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [677bb4a4-2c31-48a9-81f6-0a33bcf68507] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [677bb4a4-2c31-48a9-81f6-0a33bcf68507] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.010966542s
I1204 11:58:09.110201    1856 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-089000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-089000 addons disable ingress --alsologtostderr -v=1: (7.279834959s)
--- PASS: TestAddons/parallel/Ingress (18.54s)
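Neither nginx-ingress-v1.yaml nor nginx-pod-svc.yaml appears in this log. A rough sketch of the Ingress rule the ssh curl above exercises, using the nginx service and nginx.example.com host from the log (the object name and everything else are assumptions):

	apiVersion: networking.k8s.io/v1
	kind: Ingress
	metadata:
	  name: nginx-ingress   # assumed name
	spec:
	  rules:
	    - host: nginx.example.com   # matched by the curl -H 'Host: ...' header
	      http:
	        paths:
	          - path: /
	            pathType: Prefix
	            backend:
	              service:
	                name: nginx   # the pod/service from nginx-pod-svc.yaml
	                port:
	                  number: 80

The curl runs inside the VM against 127.0.0.1 with the Host header set, so it goes through the ingress-nginx controller rather than hitting the nginx service directly.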
TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dbrd9" [164d705e-4f3b-4030-aebd-5701323e1670] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003852375s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-089000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.2880115s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.543125ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-wc2gh" [c5698041-5306-49a1-ba90-6eef1adff6c7] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010157125s
addons_test.go:402: (dbg) Run:  kubectl --context addons-089000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (36.65s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1204 11:57:19.509832    1856 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1204 11:57:19.512445    1856 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1204 11:57:19.512453    1856 kapi.go:107] duration metric: took 2.644417ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.648042ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-089000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-089000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4d10099d-2300-4d4f-8f20-738a4a1da417] Pending
helpers_test.go:344: "task-pv-pod" [4d10099d-2300-4d4f-8f20-738a4a1da417] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4d10099d-2300-4d4f-8f20-738a4a1da417] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.010923917s
addons_test.go:511: (dbg) Run:  kubectl --context addons-089000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-089000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-089000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-089000 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-089000 delete pod task-pv-pod: (1.140993625s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-089000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-089000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-089000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3366b432-660b-4aac-b2e7-0fada2a7e07d] Pending
helpers_test.go:344: "task-pv-pod-restore" [3366b432-660b-4aac-b2e7-0fada2a7e07d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3366b432-660b-4aac-b2e7-0fada2a7e07d] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00537025s
addons_test.go:553: (dbg) Run:  kubectl --context addons-089000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-089000 delete pod task-pv-pod-restore: (1.003011375s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-089000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-089000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-089000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.092256709s)
--- PASS: TestAddons/parallel/CSI (36.65s)
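The fixtures under testdata/csi-hostpath-driver/ are not included in the log. A condensed sketch of the claim and snapshot at the center of the flow above, with object names from the log (storage and snapshot class names are assumptions):

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  storageClassName: csi-hostpath-sc   # assumed; the class served by the addon
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	---
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed
	  source:
	    persistentVolumeClaimName: hpvc

The pvc-restore.yaml step presumably creates hpvc-restore with the snapshot as its dataSource, which is why the test polls that claim's phase until the restored volume binds.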
TestAddons/parallel/Headlamp (15.6s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-089000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-tzhw5" [2cdf75a9-d9ff-494c-b586-9fb17927119a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-tzhw5" [2cdf75a9-d9ff-494c-b586-9fb17927119a] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004950958s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-089000 addons disable headlamp --alsologtostderr -v=1: (5.262830375s)
--- PASS: TestAddons/parallel/Headlamp (15.60s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-s7rq9" [11fe7859-aa27-4245-8abf-a87cdee3ece5] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.041877375s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (52.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-089000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-089000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-089000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3337a1e4-0984-4ecf-aa2c-9cd86987a53c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3337a1e4-0984-4ecf-aa2c-9cd86987a53c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3337a1e4-0984-4ecf-aa2c-9cd86987a53c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004376875s
addons_test.go:906: (dbg) Run:  kubectl --context addons-089000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 ssh "cat /opt/local-path-provisioner/pvc-54700e53-bba0-4063-9926-cff8c74a52ea_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-089000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-089000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-089000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.564685916s)
--- PASS: TestAddons/parallel/LocalPath (52.06s)
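The storage-provisioner-rancher fixtures are likewise not shown. A sketch of the claim this flow revolves around, name taken from the log (the size is an assumption; local-path is the class the rancher provisioner conventionally registers):

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path   # served by the local-path-provisioner
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 64Mi   # assumed size

local-path typically binds with volumeBindingMode: WaitForFirstConsumer, which is why the phase polling above only settles once the test-local-path pod is scheduled, and why the ssh cat step can then read file1 from under /opt/local-path-provisioner on the node.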
TestAddons/parallel/NvidiaDevicePlugin (6.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vlp9b" [2ff58d0f-8b45-48e1-8fc8-26b495231dc8] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005561375s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.16s)

TestAddons/parallel/Yakd (10.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qrsph" [1ae27763-9d64-4e0a-ba57-ba61ceaa64e6] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005554417s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-089000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-089000 addons disable yakd --alsologtostderr -v=1: (5.287692542s)
--- PASS: TestAddons/parallel/Yakd (10.29s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-089000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-089000: (12.224597833s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-089000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-089000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-089000
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestHyperKitDriverInstallOrUpdate (11.36s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1204 12:50:56.472039    1856 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 12:50:56.472253    1856 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1204 12:50:59.145238    1856 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1204 12:50:59.145440    1856 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1204 12:50:59.145498    1856 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit
I1204 12:50:59.683649    1856 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0 0x1050316e0] Decompressors:map[bz2:0x140006815f0 gz:0x140006815f8 tar:0x14000681590 tar.bz2:0x140006815b0 tar.gz:0x140006815c0 tar.xz:0x140006815d0 tar.zst:0x140006815e0 tbz2:0x140006815b0 tgz:0x140006815c0 txz:0x140006815d0 tzst:0x140006815e0 xz:0x14000681600 zip:0x14000681610 zst:0x14000681608] Getters:map[file:0x14001c59270 http:0x14000911220 https:0x14000911270] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 12:50:59.683675    1856 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate3690510839/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.36s)

TestErrorSpam/setup (35.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-724000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-724000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 --driver=qemu2 : (35.333955042s)
--- PASS: TestErrorSpam/setup (35.33s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 pause
--- PASS: TestErrorSpam/pause (0.66s)

TestErrorSpam/unpause (0.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 stop: (3.198375667s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 stop: (26.036562958s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-724000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-724000 stop: (26.032499542s)
--- PASS: TestErrorSpam/stop (55.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19985-1334/.minikube/files/etc/test/nested/copy/1856/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (47.51s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-306000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1204 12:00:47.754070    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:47.761722    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:47.775094    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:47.798503    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:47.842004    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:47.925464    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:48.088973    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:48.412562    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:49.054579    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-306000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (47.51272275s)
--- PASS: TestFunctional/serial/StartWithProxy (47.51s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.19s)

=== RUN   TestFunctional/serial/SoftStart
I1204 12:00:50.128872    1856 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-306000 --alsologtostderr -v=8
E1204 12:00:50.338168    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:52.802796    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:00:57.926532    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
E1204 12:01:08.170308    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-306000 --alsologtostderr -v=8: (38.1877735s)
functional_test.go:663: soft start took 38.188210792s for "functional-306000" cluster.
I1204 12:01:28.217054    1856 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (38.19s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-306000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cache add registry.k8s.io/pause:3.1
E1204 12:01:28.652314    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-306000 cache add registry.k8s.io/pause:3.1: (1.231232125s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-306000 cache add registry.k8s.io/pause:3.3: (1.101774833s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4021372727/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cache add minikube-local-cache-test:functional-306000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cache delete minikube-local-cache-test:functional-306000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-306000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.482666ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.69s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.77s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 kubectl -- --context functional-306000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.77s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-306000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-306000 get pods: (1.13908675s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.14s)

TestFunctional/serial/ExtraConfig (36.22s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-306000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1204 12:02:09.615436    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-306000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.214842625s)
functional_test.go:761: restart took 36.214944166s for "functional-306000" cluster.
I1204 12:02:11.838586    1856 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (36.22s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-306000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2716232403/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/serial/InvalidService (4.09s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-306000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-306000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-306000: exit status 115 (153.726792ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30372 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-306000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)
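testdata/invalidsvc.yaml is not reproduced here, but the failure mode implies a NodePort Service whose selector matches no running pod, roughly (the selector value is an assumption):

	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort   # consistent with the http://192.168.105.4:30372 URL above
	  selector:
	    app: no-such-pod   # assumed; the point is that nothing matches it
	  ports:
	    - port: 80

The service still gets a URL, so the table prints to stdout, but with no endpoints behind it minikube service exits 115 with SVC_UNREACHABLE.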
TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 config get cpus: exit status 14 (33.375208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 config get cpus: exit status 14 (38.53625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)

TestFunctional/parallel/DashboardCmd (10.05s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-306000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-306000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2973: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.05s)
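
The test only verifies that the dashboard process starts and can be torn down; the same session can be opened manually (port as used in this run), and `--url` prints the 127.0.0.1 proxy URL once the dashboard pod is reachable:

    out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-306000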

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-306000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-306000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (127.68275ms)

-- stdout --
	* [functional-306000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1204 12:03:03.624967    2947 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:03:03.625136    2947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:03:03.625142    2947 out.go:358] Setting ErrFile to fd 2...
	I1204 12:03:03.625144    2947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:03:03.625275    2947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:03:03.626654    2947 out.go:352] Setting JSON to false
	I1204 12:03:03.645994    2947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1954,"bootTime":1733340629,"procs":580,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:03:03.646064    2947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:03:03.649303    2947 out.go:177] * [functional-306000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 12:03:03.656339    2947 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:03:03.656450    2947 notify.go:220] Checking for updates...
	I1204 12:03:03.663273    2947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:03:03.667316    2947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:03:03.670212    2947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:03:03.676302    2947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:03:03.680263    2947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:03:03.683617    2947 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:03:03.683873    2947 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:03:03.688231    2947 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 12:03:03.696330    2947 start.go:297] selected driver: qemu2
	I1204 12:03:03.696346    2947 start.go:901] validating driver "qemu2" against &{Name:functional-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:03:03.696409    2947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:03:03.703093    2947 out.go:201] 
	W1204 12:03:03.707282    2947 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1204 12:03:03.708889    2947 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-306000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.25s)
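
Both invocations are pure validation passes; the failing one can be reproduced directly, since 250MB is below minikube's usable minimum of 1800MB and is rejected before any VM work starts:

    out/minikube-darwin-arm64 start -p functional-306000 --dry-run --memory 250MB --driver=qemu2
    echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)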

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-306000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-306000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.326166ms)

-- stdout --
	* [functional-306000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1204 12:03:03.499370    2940 out.go:345] Setting OutFile to fd 1 ...
	I1204 12:03:03.499527    2940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:03:03.499531    2940 out.go:358] Setting ErrFile to fd 2...
	I1204 12:03:03.499533    2940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 12:03:03.499658    2940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
	I1204 12:03:03.501280    2940 out.go:352] Setting JSON to false
	I1204 12:03:03.523605    2940 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1954,"bootTime":1733340629,"procs":584,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 12:03:03.523737    2940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 12:03:03.527228    2940 out.go:177] * [functional-306000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1204 12:03:03.534438    2940 notify.go:220] Checking for updates...
	I1204 12:03:03.537303    2940 out.go:177]   - MINIKUBE_LOCATION=19985
	I1204 12:03:03.541313    2940 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	I1204 12:03:03.542405    2940 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 12:03:03.545351    2940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 12:03:03.548308    2940 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	I1204 12:03:03.551343    2940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 12:03:03.554744    2940 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 12:03:03.555037    2940 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 12:03:03.558396    2940 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1204 12:03:03.565230    2940 start.go:297] selected driver: qemu2
	I1204 12:03:03.565240    2940 start.go:901] validating driver "qemu2" against &{Name:functional-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-306000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 12:03:03.565313    2940 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 12:03:03.572256    2940 out.go:201] 
	W1204 12:03:03.576281    2940 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1204 12:03:03.580254    2940 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
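
The French output indicates the test runs the same dry-run with a French locale in the environment (presumably something like LC_ALL=fr; the exact variable is set by the test harness and is not shown in this log). A manual equivalent would be:

    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-306000 --dry-run --memory 250MB --driver=qemu2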

TestFunctional/parallel/StatusCmd (0.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.27s)
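
The three status forms exercised above are runnable as-is; the -f template accepts the Host, Kubelet, APIServer and Kubeconfig fields:

    out/minikube-darwin-arm64 -p functional-306000 status
    out/minikube-darwin-arm64 -p functional-306000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-darwin-arm64 -p functional-306000 status -o json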

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (25.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a7d371d3-e7b9-41ee-889a-547c288b743d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007961833s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-306000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-306000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-306000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-306000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dc361b84-ac5a-4390-bef2-0f39faa4624f] Pending
helpers_test.go:344: "sp-pod" [dc361b84-ac5a-4390-bef2-0f39faa4624f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dc361b84-ac5a-4390-bef2-0f39faa4624f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.005502708s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-306000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-306000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-306000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f19f4dd8-2d80-44f9-b2ba-7f1d24967d19] Pending
helpers_test.go:344: "sp-pod" [f19f4dd8-2d80-44f9-b2ba-7f1d24967d19] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f19f4dd8-2d80-44f9-b2ba-7f1d24967d19] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00666675s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-306000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.39s)
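
The sequence above is a persistence check: write a file through the claim, delete and recreate the pod, and confirm the file survives. Condensed (manifests are the test's own testdata):

    kubectl --context functional-306000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-306000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-306000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-306000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-306000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-306000 exec sp-pod -- ls /tmp/mount   # foo should still be listed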

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh -n functional-306000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cp functional-306000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1775613825/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh -n functional-306000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh -n functional-306000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.48s)
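
Both copy directions, condensed (the host-side destination file name here is illustrative):

    out/minikube-darwin-arm64 -p functional-306000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> VM
    out/minikube-darwin-arm64 -p functional-306000 cp functional-306000:/home/docker/cp-test.txt ./cp-test.txt   # VM -> host
    out/minikube-darwin-arm64 -p functional-306000 ssh -n functional-306000 "sudo cat /home/docker/cp-test.txt"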

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1856/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /etc/test/nested/copy/1856/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
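
File sync works by mirroring everything under $MINIKUBE_HOME/files into the VM at the same relative path, so the check amounts to placing a file on the host and reading it back over ssh (the host-side path below is the assumed sync source):

    # host side: $MINIKUBE_HOME/files/etc/test/nested/copy/1856/hosts
    out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /etc/test/nested/copy/1856/hosts"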

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1856.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /etc/ssl/certs/1856.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1856.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /usr/share/ca-certificates/1856.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/18562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /etc/ssl/certs/18562.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/18562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /usr/share/ca-certificates/18562.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
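
The hash-named entries (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for certs synced from the host, so the expected file name can be derived from the certificate itself (host-side cert path is illustrative; for the cert behind 1856.pem this should print 51391683):

    openssl x509 -noout -subject_hash -in 1856.pem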

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-306000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh "sudo systemctl is-active crio": exit status 1 (119.973584ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
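
`systemctl is-active` exits 3 for an inactive unit, which the ssh wrapper surfaces as the non-zero status seen above; with docker as the active runtime, crio is expected to be stopped:

    out/minikube-darwin-arm64 -p functional-306000 ssh "sudo systemctl is-active crio"   # prints "inactive", exits non-zero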

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.28s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.28s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-306000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-306000
docker.io/kicbase/echo-server:functional-306000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-306000 image ls --format short --alsologtostderr:
I1204 12:03:04.398723    2974 out.go:345] Setting OutFile to fd 1 ...
I1204 12:03:04.399108    2974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:04.399113    2974 out.go:358] Setting ErrFile to fd 2...
I1204 12:03:04.399116    2974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:04.399258    2974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
I1204 12:03:04.399708    2974 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:04.399770    2974 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:04.400683    2974 ssh_runner.go:195] Run: systemctl --version
I1204 12:03:04.400695    2974 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
I1204 12:03:04.424669    2974 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-306000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 9404aea098d9e | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 021d242013305 | 94.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/library/nginx                     | alpine            | dba92e6b64886 | 56.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kicbase/echo-server               | functional-306000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.31.2           | d6b061e73ae45 | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| localhost/my-image                          | functional-306000 | a30a2ff9bc4fa | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-306000 | f42dfbcd8f1fe | 30B    |
| docker.io/library/nginx                     | latest            | bdf62fd3a32f1 | 197MB  |
| registry.k8s.io/kube-apiserver              | v1.31.2           | f9c26480f1e72 | 91.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-306000 image ls --format table --alsologtostderr:
I1204 12:03:06.509348    2986 out.go:345] Setting OutFile to fd 1 ...
I1204 12:03:06.509545    2986 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:06.509549    2986 out.go:358] Setting ErrFile to fd 2...
I1204 12:03:06.509551    2986 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:06.509680    2986 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
I1204 12:03:06.510208    2986 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:06.510269    2986 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:06.511146    2986 ssh_runner.go:195] Run: systemctl --version
I1204 12:03:06.511156    2986 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
I1204 12:03:06.537124    2986 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/12/04 12:03:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-306000 image ls --format json --alsologtostderr:
[{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"91600000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-306000"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"a30a2ff9bc4fa4d43eef15efb9c0966f6666d95d8faf1d581b8ef4f77a4cc486","repoDigests":[],"repoTags":["localhost/my-image:functional-306000"],"size":"1410000"},{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"85900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"f42dfbcd8f1fe3b387f0fe4e952cde77f588aa96bae4e79373e014f80a3ec14c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-306000"],"size":"30"},{"id":"bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"56900000"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-306000 image ls --format json --alsologtostderr:
I1204 12:03:06.433733    2984 out.go:345] Setting OutFile to fd 1 ...
I1204 12:03:06.433916    2984 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:06.433920    2984 out.go:358] Setting ErrFile to fd 2...
I1204 12:03:06.433922    2984 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:06.434060    2984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
I1204 12:03:06.434481    2984 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:06.434542    2984 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:06.435458    2984 ssh_runner.go:195] Run: systemctl --version
I1204 12:03:06.435468    2984 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
I1204 12:03:06.461022    2984 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-306000 image ls --format yaml --alsologtostderr:
- id: bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "91600000"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "66000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-306000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: f42dfbcd8f1fe3b387f0fe4e952cde77f588aa96bae4e79373e014f80a3ec14c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-306000
size: "30"
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "56900000"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "85900000"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "94700000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-306000 image ls --format yaml --alsologtostderr:
I1204 12:03:04.471469    2976 out.go:345] Setting OutFile to fd 1 ...
I1204 12:03:04.471689    2976 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:04.471693    2976 out.go:358] Setting ErrFile to fd 2...
I1204 12:03:04.471695    2976 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:04.471824    2976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
I1204 12:03:04.472274    2976 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:04.472333    2976 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:04.473121    2976 ssh_runner.go:195] Run: systemctl --version
I1204 12:03:04.473129    2976 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
I1204 12:03:04.497296    2976 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh pgrep buildkitd: exit status 1 (64.991584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image build -t localhost/my-image:functional-306000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-306000 image build -t localhost/my-image:functional-306000 testdata/build --alsologtostderr: (1.743884042s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-306000 image build -t localhost/my-image:functional-306000 testdata/build --alsologtostderr:
I1204 12:03:04.613240    2980 out.go:345] Setting OutFile to fd 1 ...
I1204 12:03:04.613623    2980 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:04.613627    2980 out.go:358] Setting ErrFile to fd 2...
I1204 12:03:04.613629    2980 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 12:03:04.613783    2980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19985-1334/.minikube/bin
I1204 12:03:04.614355    2980 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:04.620428    2980 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 12:03:04.621359    2980 ssh_runner.go:195] Run: systemctl --version
I1204 12:03:04.621370    2980 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19985-1334/.minikube/machines/functional-306000/id_rsa Username:docker}
I1204 12:03:04.648007    2980 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.254238997.tar
I1204 12:03:04.648103    2980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1204 12:03:04.653162    2980 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.254238997.tar
I1204 12:03:04.655065    2980 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.254238997.tar: stat -c "%s %y" /var/lib/minikube/build/build.254238997.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.254238997.tar': No such file or directory
I1204 12:03:04.655088    2980 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.254238997.tar --> /var/lib/minikube/build/build.254238997.tar (3072 bytes)
I1204 12:03:04.668205    2980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.254238997
I1204 12:03:04.676985    2980 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.254238997 -xf /var/lib/minikube/build/build.254238997.tar
I1204 12:03:04.684291    2980 docker.go:360] Building image: /var/lib/minikube/build/build.254238997
I1204 12:03:04.684362    2980 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-306000 /var/lib/minikube/build/build.254238997
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:a30a2ff9bc4fa4d43eef15efb9c0966f6666d95d8faf1d581b8ef4f77a4cc486 done
#8 naming to localhost/my-image:functional-306000 done
#8 DONE 0.0s
I1204 12:03:06.259261    2980 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-306000 /var/lib/minikube/build/build.254238997: (1.574913625s)
I1204 12:03:06.259347    2980 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.254238997
I1204 12:03:06.263896    2980 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.254238997.tar
I1204 12:03:06.267138    2980 build_images.go:217] Built localhost/my-image:functional-306000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.254238997.tar
I1204 12:03:06.267153    2980 build_images.go:133] succeeded building to: functional-306000
I1204 12:03:06.267157    2980 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.89s)
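
As the stderr trace shows, the build context is tarred up on the host, shipped over ssh, and built with an ordinary docker build inside the VM; the user-facing equivalent is just:

    out/minikube-darwin-arm64 -p functional-306000 image build -t localhost/my-image:functional-306000 testdata/build
    out/minikube-darwin-arm64 -p functional-306000 image ls   # localhost/my-image:functional-306000 should now be listed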

TestFunctional/parallel/ImageCommands/Setup (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.939011708s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-306000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)
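
Setup only stages a host-side image for the later load/save subtests:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-306000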

TestFunctional/parallel/DockerEnv/bash (0.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-306000 docker-env) && out/minikube-darwin-arm64 status -p functional-306000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-306000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)
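
docker-env emits shell exports (DOCKER_HOST plus TLS settings) pointing at the daemon inside the VM, so after the eval the host docker client lists the cluster's images rather than the host's:

    eval $(out/minikube-darwin-arm64 -p functional-306000 docker-env)
    docker images   # now served by the Docker daemon inside functional-306000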

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-306000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-306000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-306000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-306000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2725: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image load --daemon kicbase/echo-server:functional-306000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image load --daemon kicbase/echo-server:functional-306000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-306000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image load --daemon kicbase/echo-server:functional-306000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-306000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-306000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d18be338-61d4-4e2d-aae0-e1ca81165bd4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d18be338-61d4-4e2d-aae0-e1ca81165bd4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005807667s
I1204 12:02:29.680611    1856 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image save kicbase/echo-server:functional-306000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image rm kicbase/echo-server:functional-306000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-306000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 image save --daemon kicbase/echo-server:functional-306000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-306000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
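
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon round-trip an image between the cluster and the host. A sketch of the same sequence, with an illustrative tarball path in place of the Jenkins workspace path used in this run:

    # save the in-cluster image to a tarball, delete the tag, then restore it
    out/minikube-darwin-arm64 -p functional-306000 image save kicbase/echo-server:functional-306000 /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-306000 image rm kicbase/echo-server:functional-306000
    out/minikube-darwin-arm64 -p functional-306000 image load /tmp/echo-server-save.tar
    # confirm the tag is present again
    out/minikube-darwin-arm64 -p functional-306000 image ls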

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-306000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.239.212 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
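
The tunnel serial tests above amount to the following workflow. A minimal sketch, assuming nginx-svc is a LoadBalancer service as in testdata/testsvc.yaml (curl is an assumption, not part of the test):

    # keep a tunnel open so LoadBalancer services receive an ingress IP
    out/minikube-darwin-arm64 -p functional-306000 tunnel --alsologtostderr &
    kubectl --context functional-306000 apply -f testdata/testsvc.yaml
    # read the assigned ingress IP and hit the service directly from the host
    IP=$(kubectl --context functional-306000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://$IP"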

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1204 12:02:29.764330    1856 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1204 12:02:29.807809    1856 config.go:182] Loaded profile config "functional-306000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-306000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-306000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-306000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-vz2rs" [c84bba00-e47e-473e-b1bc-1fc337e8f3f8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-vz2rs" [c84bba00-e47e-473e-b1bc-1fc337e8f3f8] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003922583s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 service list -o json
functional_test.go:1494: Took "294.986417ms" to run "out/minikube-darwin-arm64 -p functional-306000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30123
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30123
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
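
The service subcommands above resolve a NodePort endpoint for a deployment. A sketch of consuming that URL from a script (the endpoint shown is the one found in this run; curl is an assumption):

    # capture the NodePort URL for a service and probe it
    URL=$(out/minikube-darwin-arm64 -p functional-306000 service hello-node --url)
    echo "$URL"   # http://192.168.105.4:30123 in this run
    curl "$URL"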

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "103.186667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "37.923709ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "105.675ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "38.71625ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3830650269/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733342575178937000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3830650269/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733342575178937000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3830650269/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733342575178937000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3830650269/001/test-1733342575178937000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.544959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 12:02:55.244055    1856 retry.go:31] will retry after 496.671482ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  4 20:02 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  4 20:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  4 20:02 test-1733342575178937000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh cat /mount-9p/test-1733342575178937000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-306000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [db2d25dd-2a09-4d2a-8619-380af88f773b] Pending
helpers_test.go:344: "busybox-mount" [db2d25dd-2a09-4d2a-8619-380af88f773b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [db2d25dd-2a09-4d2a-8619-380af88f773b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [db2d25dd-2a09-4d2a-8619-380af88f773b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002029584s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-306000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3830650269/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.22s)
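
The mount tests here and below follow a consistent pattern: start the 9p mount in the background, poll until it appears in the guest (the non-zero findmnt exits above are that retry loop, not failures), then clean up. A sketch with an illustrative host path:

    # expose a host directory in the guest at /mount-9p over 9p
    out/minikube-darwin-arm64 mount -p functional-306000 /tmp/demo:/mount-9p --alsologtostderr -v=1 &
    # the mount takes a moment; the test retries this probe on exit status 1
    out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-306000 ssh -- ls -la /mount-9p
    # tear down every mount process for the profile (used by VerifyCleanup below)
    out/minikube-darwin-arm64 mount -p functional-306000 --kill=true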

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1497503367/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (72.069875ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 12:03:00.476228    1856 retry.go:31] will retry after 447.771748ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1497503367/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh "sudo umount -f /mount-9p": exit status 1 (64.792458ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-306000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1497503367/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.96s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount1: exit status 1 (74.568625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 12:03:01.438671    1856 retry.go:31] will retry after 719.747748ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount2: exit status 1 (62.705458ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1204 12:03:02.294237    1856 retry.go:31] will retry after 923.66171ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-306000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-306000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-306000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3650779954/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-306000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-306000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-306000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-990000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.03s)

                                                
                                    
TestImageBuild/serial/Setup (34.8s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-903000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-903000 --driver=qemu2 : (34.7992475s)
--- PASS: TestImageBuild/serial/Setup (34.80s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.38s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-903000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-903000: (1.377654375s)
--- PASS: TestImageBuild/serial/NormalBuild (1.38s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.43s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-903000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.43s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.34s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-903000
E1204 12:33:50.728088    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/addons-089000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.34s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.32s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-903000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.32s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (4.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-377000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-377000 --output=json --user=testUser: (4.879404792s)
--- PASS: TestJSONOutput/stop/Command (4.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-880000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-880000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.00275ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"db1b5673-e30c-4f8c-b2b9-41f14642bb1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-880000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"93c5013a-aa4d-4116-b7cd-8e3121d3f4c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19985"}}
	{"specversion":"1.0","id":"65f47477-f5e6-4f87-a3cf-96811fd3c537","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig"}}
	{"specversion":"1.0","id":"1b4885f3-4300-42ae-8e7d-6d4f48f46be8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1afcd459-ef61-4850-a1da-43ef54f802ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bb425e1d-7ffa-4c9b-aee1-ed66d4b1d8d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube"}}
	{"specversion":"1.0","id":"e7400cbc-ee8c-435e-9224-1d06ff335f40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7f955dd2-ecd2-49e6-9f4c-e484219332df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-880000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-880000
--- PASS: TestErrorJSONOutput (0.22s)
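
As the stdout above shows, each line emitted under --output=json is a CloudEvents-style object with a type such as io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error. A sketch of machine-reading that stream (jq availability is an assumption; the stop command is the one validated earlier in this run):

    # pull the human-readable message out of every event, whatever its type
    out/minikube-darwin-arm64 stop -p json-output-377000 --output=json --user=testUser \
      | jq -r '.data.message // empty'
    # error events additionally carry data.exitcode, e.g. "56" for DRV_UNSUPPORTED_OS above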

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (71.72s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-939000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-939000 --driver=qemu2 : (33.806057083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-940000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-940000 --driver=qemu2 : (37.190164416s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-939000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-940000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-940000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-940000
helpers_test.go:175: Cleaning up "first-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-939000
--- PASS: TestMinikubeProfile (71.72s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (108.703958ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-051000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19985
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19985-1334/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19985-1334/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
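
The exit-14 guard above is the expected behavior: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the two valid ways forward, following the error text:

    # either start without a version flag at all...
    out/minikube-darwin-arm64 start -p NoKubernetes-051000 --no-kubernetes --driver=qemu2
    # ...or clear a globally configured version first, as the error suggests
    out/minikube-darwin-arm64 config unset kubernetes-version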

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-051000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-051000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.532667ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-051000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.78384825s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E1204 13:02:20.635513    1856 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19985-1334/.minikube/profiles/functional-306000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.688405417s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.47s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-051000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-051000: (3.587560084s)
--- PASS: TestNoKubernetes/serial/Stop (3.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-051000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-051000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.716708ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-051000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-051000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-827000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-570000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-570000 --alsologtostderr -v=3: (1.812653792s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-570000 -n old-k8s-version-570000: exit status 7 (36.960208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-570000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-676000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-676000 --alsologtostderr -v=3: (3.366707292s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-676000 -n no-preload-676000: exit status 7 (66.535958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-676000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-467000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-467000 --alsologtostderr -v=3: (2.771997958s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-467000 -n embed-certs-467000: exit status 7 (64.488167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-467000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-596000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-596000 --alsologtostderr -v=3: (3.641412916s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-596000 -n default-k8s-diff-port-596000: exit status 7 (63.365417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-596000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-132000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-132000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-132000 --alsologtostderr -v=3: (3.539390375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.54s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-132000 -n newest-cni-132000: exit status 7 (70.533542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-132000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
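The zero-second newest-cni passes above are effectively no-ops: on a bare-CNI profile, pods cannot be scheduled until additional network setup is done, so the harness logs the warning and returns early instead of asserting on pods. A hedged Go sketch of that guard style (the function name and profile check are assumptions for illustration, not the actual start_stop_delete_test.go source):

    package sketch

    import (
        "strings"
        "testing"
    )

    // validateAddonAfterStop sketches how a pod-level check can bail out for
    // bare-CNI profiles; because no assertion fails, the subtest still PASSes.
    func validateAddonAfterStop(t *testing.T, profile string) {
        if strings.Contains(profile, "newest-cni") {
            t.Log("WARNING: cni mode requires additional setup before pods can schedule :(")
            return // no pod assertions run, matching the 0.00s passes above
        }
        // non-CNI profiles would verify the dashboard pods here
    }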

                                                
                                    

Test skip (23/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)
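Both Kubernetes versions skip their cached-images and binaries subtests for the same reason: a preloaded tarball already bundles the container images and the kubelet/kubectl binaries, so there is nothing left to fetch piecemeal. A small Go sketch of that existence check (the cache path is an assumption for illustration; the real aaa_download_only_test.go logic may differ):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        home, err := os.UserHomeDir()
        if err != nil {
            panic(err)
        }
        // Assumed location of minikube's preload cache, for illustration only.
        preloadDir := filepath.Join(home, ".minikube", "cache", "preloaded-tarball")
        entries, err := os.ReadDir(preloadDir)
        if err != nil || len(entries) == 0 {
            fmt.Println("no preload found; images and binaries would be fetched individually")
            return
        }
        fmt.Println("Preload exists, images won't be cached") // the skip reason above
    }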

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
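This skip is a plain architecture guard: the mysql image the test deploys has no arm64 build, so on Apple Silicon the subtest bows out up front. A hedged Go sketch of such a guard (the function name is an assumption; the real functional_test.go code may differ):

    package sketch

    import (
        "runtime"
        "testing"
    )

    // skipIfMySQLUnsupported sketches the platform check implied by the skip
    // message above; the skip text is quoted verbatim from the log.
    func skipIfMySQLUnsupported(t *testing.T) {
        if runtime.GOARCH == "arm64" {
            t.Skip("arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144")
        }
    }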

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-395000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-395000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-395000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /etc/hosts:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /etc/resolv.conf:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-395000

>>> host: crictl pods:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: crictl containers:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> k8s: describe netcat deployment:
error: context "cilium-395000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-395000" does not exist

>>> k8s: netcat logs:
error: context "cilium-395000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-395000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-395000" does not exist

>>> k8s: coredns logs:
error: context "cilium-395000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-395000" does not exist

>>> k8s: api server logs:
error: context "cilium-395000" does not exist

>>> host: /etc/cni:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: ip a s:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: ip r s:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: iptables-save:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: iptables table nat:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-395000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-395000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-395000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-395000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-395000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-395000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-395000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-395000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-395000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-395000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-395000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: kubelet daemon config:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> k8s: kubelet logs:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-395000

>>> host: docker daemon status:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: docker daemon config:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: docker system info:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: cri-docker daemon status:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: cri-docker daemon config:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: cri-dockerd version:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: containerd daemon status:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: containerd daemon config:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: containerd config dump:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: crio daemon status:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: crio daemon config:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: /etc/crio:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

>>> host: crio config:
* Profile "cilium-395000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-395000"

----------------------- debugLogs end: cilium-395000 [took: 2.352085209s] --------------------------------
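Every probe in the dump above fails in one of two ways for the same root cause: the test is skipped before "minikube start" ever runs, so kubectl has no cilium-395000 context and minikube has no cilium-395000 profile; debugLogs therefore records only the cluster's absence. A quick Go sketch for confirming that locally (illustrative, not harness code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // kubectl knows no "cilium-395000" context, hence the context errors...
        out, _ := exec.Command("kubectl", "config", "get-contexts").CombinedOutput()
        fmt.Print(string(out))
        // ...and minikube knows no "cilium-395000" profile, hence the
        // "Profile not found" hints on every host-level probe.
        out, _ = exec.Command("out/minikube-darwin-arm64", "profile", "list").CombinedOutput()
        fmt.Print(string(out))
    }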
helpers_test.go:175: Cleaning up "cilium-395000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-395000
--- SKIP: TestNetworkPlugins/group/cilium (2.47s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-466000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-466000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)